Test Report: KVM_Linux_crio 21975

bf5d9cb38ae1a2b3e4a9e22e363e3b0c86085c7c:2025-11-24:42481

Failed tests (3/345)

Order  Failed test                                      Duration (s)
37     TestAddons/parallel/Ingress                      158.82
244    TestPreload                                      153.6
325    TestPause/serial/SecondStartNoReconfiguration    67.83
TestAddons/parallel/Ingress (158.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-775116 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-775116 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-775116 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [68f27977-c2e9-4ef1-9e72-5688758a7fd4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [68f27977-c2e9-4ef1-9e72-5688758a7fd4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004812043s
I1124 02:41:37.677519  189749 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-775116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.151712091s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-775116 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.95
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-775116 -n addons-775116
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-775116 logs -n 25: (1.095045449s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-824012                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-824012 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ start   │ --download-only -p binary-mirror-467261 --alsologtostderr --binary-mirror http://127.0.0.1:44003 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-467261 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │                     │
	│ delete  │ -p binary-mirror-467261                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-467261 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ addons  │ enable dashboard -p addons-775116                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │                     │
	│ addons  │ disable dashboard -p addons-775116                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │                     │
	│ start   │ -p addons-775116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:40 UTC │
	│ addons  │ addons-775116 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:40 UTC │ 24 Nov 25 02:40 UTC │
	│ addons  │ addons-775116 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:40 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ addons-775116 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ addons-775116 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ enable headlamp -p addons-775116 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ ssh     │ addons-775116 ssh cat /opt/local-path-provisioner/pvc-ad5e62ee-345d-4806-badd-0fe8f1bfff03_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ addons-775116 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ addons-775116 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ ip      │ addons-775116 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ addons-775116 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ addons-775116 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ addons-775116 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ addons-775116 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-775116                                                                                                                                                                                                                                                                                                                                                                                         │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ addons  │ addons-775116 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │ 24 Nov 25 02:41 UTC │
	│ ssh     │ addons-775116 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:41 UTC │                     │
	│ addons  │ addons-775116 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │ 24 Nov 25 02:42 UTC │
	│ addons  │ addons-775116 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │ 24 Nov 25 02:42 UTC │
	│ ip      │ addons-775116 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-775116        │ jenkins │ v1.37.0 │ 24 Nov 25 02:43 UTC │ 24 Nov 25 02:43 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:38:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:38:36.521128  190426 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:38:36.521406  190426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:38:36.521416  190426 out.go:374] Setting ErrFile to fd 2...
	I1124 02:38:36.521420  190426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:38:36.521608  190426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 02:38:36.522110  190426 out.go:368] Setting JSON to false
	I1124 02:38:36.523019  190426 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8456,"bootTime":1763943460,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:38:36.523073  190426 start.go:143] virtualization: kvm guest
	I1124 02:38:36.524969  190426 out.go:179] * [addons-775116] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:38:36.526220  190426 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:38:36.526204  190426 notify.go:221] Checking for updates...
	I1124 02:38:36.528500  190426 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:38:36.529717  190426 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 02:38:36.530817  190426 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 02:38:36.531798  190426 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:38:36.532844  190426 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:38:36.534108  190426 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:38:36.563808  190426 out.go:179] * Using the kvm2 driver based on user configuration
	I1124 02:38:36.564883  190426 start.go:309] selected driver: kvm2
	I1124 02:38:36.564894  190426 start.go:927] validating driver "kvm2" against <nil>
	I1124 02:38:36.564905  190426 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:38:36.565617  190426 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:38:36.565831  190426 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 02:38:36.565858  190426 cni.go:84] Creating CNI manager for ""
	I1124 02:38:36.565901  190426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 02:38:36.565910  190426 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 02:38:36.565949  190426 start.go:353] cluster config:
	{Name:addons-775116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-775116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1124 02:38:36.566059  190426 iso.go:125] acquiring lock: {Name:mk63ee8f30093c8c7d0696dd2486a8eb0d8bd024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 02:38:36.567417  190426 out.go:179] * Starting "addons-775116" primary control-plane node in "addons-775116" cluster
	I1124 02:38:36.568404  190426 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:38:36.568429  190426 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 02:38:36.568437  190426 cache.go:65] Caching tarball of preloaded images
	I1124 02:38:36.568520  190426 preload.go:238] Found /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 02:38:36.568532  190426 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 02:38:36.568820  190426 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/config.json ...
	I1124 02:38:36.568840  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/config.json: {Name:mk495b4c5566e90b037a0c9f4d61e81849dbfd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:38:36.568967  190426 start.go:360] acquireMachinesLock for addons-775116: {Name:mk6edb9cd27540c3b670af896ffc377aa954769e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 02:38:36.569019  190426 start.go:364] duration metric: took 40.26µs to acquireMachinesLock for "addons-775116"
	I1124 02:38:36.569038  190426 start.go:93] Provisioning new machine with config: &{Name:addons-775116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-775116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 02:38:36.569083  190426 start.go:125] createHost starting for "" (driver="kvm2")
	I1124 02:38:36.571202  190426 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1124 02:38:36.571335  190426 start.go:159] libmachine.API.Create for "addons-775116" (driver="kvm2")
	I1124 02:38:36.571359  190426 client.go:173] LocalClient.Create starting
	I1124 02:38:36.571470  190426 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem
	I1124 02:38:36.649130  190426 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem
	I1124 02:38:36.729349  190426 main.go:143] libmachine: creating domain...
	I1124 02:38:36.729380  190426 main.go:143] libmachine: creating network...
	I1124 02:38:36.730820  190426 main.go:143] libmachine: found existing default network
	I1124 02:38:36.731009  190426 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1124 02:38:36.731541  190426 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce0a90}
	I1124 02:38:36.731621  190426 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-775116</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1124 02:38:36.737401  190426 main.go:143] libmachine: creating private network mk-addons-775116 192.168.39.0/24...
	I1124 02:38:36.805810  190426 main.go:143] libmachine: private network mk-addons-775116 192.168.39.0/24 created
	I1124 02:38:36.806084  190426 main.go:143] libmachine: <network>
	  <name>mk-addons-775116</name>
	  <uuid>1a0fe294-bd62-49d1-a4e6-b967734af99f</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:31:6e:6b'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1124 02:38:36.806109  190426 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116 ...
	I1124 02:38:36.806145  190426 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21975-185833/.minikube/cache/iso/amd64/minikube-v1.37.0-1763935228-21975-amd64.iso
	I1124 02:38:36.806156  190426 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 02:38:36.806230  190426 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21975-185833/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21975-185833/.minikube/cache/iso/amd64/minikube-v1.37.0-1763935228-21975-amd64.iso...
	I1124 02:38:37.100752  190426 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa...
	I1124 02:38:37.160886  190426 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/addons-775116.rawdisk...
	I1124 02:38:37.160934  190426 main.go:143] libmachine: Writing magic tar header
	I1124 02:38:37.160958  190426 main.go:143] libmachine: Writing SSH key tar header
	I1124 02:38:37.161037  190426 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116 ...
	I1124 02:38:37.161095  190426 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116
	I1124 02:38:37.161117  190426 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116 (perms=drwx------)
	I1124 02:38:37.161129  190426 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21975-185833/.minikube/machines
	I1124 02:38:37.161139  190426 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21975-185833/.minikube/machines (perms=drwxr-xr-x)
	I1124 02:38:37.161150  190426 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 02:38:37.161164  190426 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21975-185833/.minikube (perms=drwxr-xr-x)
	I1124 02:38:37.161176  190426 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21975-185833
	I1124 02:38:37.161193  190426 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21975-185833 (perms=drwxrwxr-x)
	I1124 02:38:37.161205  190426 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1124 02:38:37.161214  190426 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1124 02:38:37.161220  190426 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1124 02:38:37.161230  190426 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1124 02:38:37.161239  190426 main.go:143] libmachine: checking permissions on dir: /home
	I1124 02:38:37.161248  190426 main.go:143] libmachine: skipping /home - not owner
	I1124 02:38:37.161252  190426 main.go:143] libmachine: defining domain...
	I1124 02:38:37.162662  190426 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-775116</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/addons-775116.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-775116'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1124 02:38:37.167893  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:c2:9e:1d in network default
	I1124 02:38:37.168573  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:37.168595  190426 main.go:143] libmachine: starting domain...
	I1124 02:38:37.168599  190426 main.go:143] libmachine: ensuring networks are active...
	I1124 02:38:37.169403  190426 main.go:143] libmachine: Ensuring network default is active
	I1124 02:38:37.169864  190426 main.go:143] libmachine: Ensuring network mk-addons-775116 is active
	I1124 02:38:37.170617  190426 main.go:143] libmachine: getting domain XML...
	I1124 02:38:37.171858  190426 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-775116</name>
	  <uuid>23af4524-66cc-48f5-bd57-66cdcf8ba09a</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/addons-775116.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:05:17:dd'/>
	      <source network='mk-addons-775116'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:c2:9e:1d'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1124 02:38:38.405985  190426 main.go:143] libmachine: waiting for domain to start...
	I1124 02:38:38.407550  190426 main.go:143] libmachine: domain is now running
	I1124 02:38:38.407572  190426 main.go:143] libmachine: waiting for IP...
	I1124 02:38:38.408349  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:38.409062  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:38.409076  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:38.409398  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:38.409459  190426 retry.go:31] will retry after 263.952872ms: waiting for domain to come up
	I1124 02:38:38.674998  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:38.675894  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:38.675912  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:38.676239  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:38.676291  190426 retry.go:31] will retry after 284.010591ms: waiting for domain to come up
	I1124 02:38:38.962130  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:38.962923  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:38.962947  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:38.963308  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:38.963357  190426 retry.go:31] will retry after 374.302842ms: waiting for domain to come up
	I1124 02:38:39.339038  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:39.339741  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:39.339763  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:39.340128  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:39.340177  190426 retry.go:31] will retry after 389.305137ms: waiting for domain to come up
	I1124 02:38:39.730721  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:39.731339  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:39.731350  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:39.731675  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:39.731714  190426 retry.go:31] will retry after 655.782264ms: waiting for domain to come up
	I1124 02:38:40.389609  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:40.390547  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:40.390568  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:40.390985  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:40.391028  190426 retry.go:31] will retry after 802.130953ms: waiting for domain to come up
	I1124 02:38:41.194299  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:41.194997  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:41.195029  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:41.195327  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:41.195384  190426 retry.go:31] will retry after 955.224555ms: waiting for domain to come up
	I1124 02:38:42.152499  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:42.153129  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:42.153147  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:42.153452  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:42.153495  190426 retry.go:31] will retry after 1.479911945s: waiting for domain to come up
	I1124 02:38:43.634640  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:43.635301  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:43.635320  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:43.635628  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:43.635659  190426 retry.go:31] will retry after 1.25051578s: waiting for domain to come up
	I1124 02:38:44.887527  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:44.888194  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:44.888210  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:44.888562  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:44.888610  190426 retry.go:31] will retry after 1.925705424s: waiting for domain to come up
	I1124 02:38:46.815760  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:46.816523  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:46.816541  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:46.816868  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:46.816912  190426 retry.go:31] will retry after 2.832800755s: waiting for domain to come up
	I1124 02:38:49.652885  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:49.653662  190426 main.go:143] libmachine: no network interface addresses found for domain addons-775116 (source=lease)
	I1124 02:38:49.653680  190426 main.go:143] libmachine: trying to list again with source=arp
	I1124 02:38:49.653939  190426 main.go:143] libmachine: unable to find current IP address of domain addons-775116 in network mk-addons-775116 (interfaces detected: [])
	I1124 02:38:49.653973  190426 retry.go:31] will retry after 3.555570247s: waiting for domain to come up
	I1124 02:38:53.211681  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.212397  190426 main.go:143] libmachine: domain addons-775116 has current primary IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.212413  190426 main.go:143] libmachine: found domain IP: 192.168.39.95
	I1124 02:38:53.212428  190426 main.go:143] libmachine: reserving static IP address...
	I1124 02:38:53.212810  190426 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-775116", mac: "52:54:00:05:17:dd", ip: "192.168.39.95"} in network mk-addons-775116
	I1124 02:38:53.396227  190426 main.go:143] libmachine: reserved static IP address 192.168.39.95 for domain addons-775116
	I1124 02:38:53.396251  190426 main.go:143] libmachine: waiting for SSH...
	I1124 02:38:53.396259  190426 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 02:38:53.399457  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.400042  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:53.400069  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.400308  190426 main.go:143] libmachine: Using SSH client type: native
	I1124 02:38:53.400590  190426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1124 02:38:53.400602  190426 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 02:38:53.516821  190426 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 02:38:53.517192  190426 main.go:143] libmachine: domain creation complete
	I1124 02:38:53.518990  190426 machine.go:94] provisionDockerMachine start ...
	I1124 02:38:53.521612  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.522038  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:53.522061  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.522260  190426 main.go:143] libmachine: Using SSH client type: native
	I1124 02:38:53.522524  190426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1124 02:38:53.522536  190426 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 02:38:53.632972  190426 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 02:38:53.633007  190426 buildroot.go:166] provisioning hostname "addons-775116"
	I1124 02:38:53.636328  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.636804  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:53.636828  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.637023  190426 main.go:143] libmachine: Using SSH client type: native
	I1124 02:38:53.637229  190426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1124 02:38:53.637241  190426 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-775116 && echo "addons-775116" | sudo tee /etc/hostname
	I1124 02:38:53.764311  190426 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-775116
	
	I1124 02:38:53.767151  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.767553  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:53.767574  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.767774  190426 main.go:143] libmachine: Using SSH client type: native
	I1124 02:38:53.767971  190426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1124 02:38:53.767989  190426 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-775116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-775116/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-775116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 02:38:53.886855  190426 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 02:38:53.886905  190426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21975-185833/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-185833/.minikube}
	I1124 02:38:53.886928  190426 buildroot.go:174] setting up certificates
	I1124 02:38:53.886944  190426 provision.go:84] configureAuth start
	I1124 02:38:53.889973  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.890409  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:53.890431  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.892667  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.892984  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:53.893005  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:53.893126  190426 provision.go:143] copyHostCerts
	I1124 02:38:53.893200  190426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem (1078 bytes)
	I1124 02:38:53.893336  190426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem (1123 bytes)
	I1124 02:38:53.893407  190426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem (1675 bytes)
	I1124 02:38:53.893455  190426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem org=jenkins.addons-775116 san=[127.0.0.1 192.168.39.95 addons-775116 localhost minikube]
	I1124 02:38:54.038789  190426 provision.go:177] copyRemoteCerts
	I1124 02:38:54.038850  190426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 02:38:54.041538  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.041921  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:54.041948  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.042075  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:38:54.127877  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 02:38:54.155301  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 02:38:54.182109  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 02:38:54.208449  190426 provision.go:87] duration metric: took 321.488102ms to configureAuth
	I1124 02:38:54.208477  190426 buildroot.go:189] setting minikube options for container-runtime
	I1124 02:38:54.208651  190426 config.go:182] Loaded profile config "addons-775116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:38:54.211443  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.211811  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:54.211839  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.211999  190426 main.go:143] libmachine: Using SSH client type: native
	I1124 02:38:54.212183  190426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1124 02:38:54.212199  190426 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 02:38:54.450667  190426 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 02:38:54.450730  190426 machine.go:97] duration metric: took 931.718383ms to provisionDockerMachine
	I1124 02:38:54.450748  190426 client.go:176] duration metric: took 17.87937964s to LocalClient.Create
	I1124 02:38:54.450774  190426 start.go:167] duration metric: took 17.87943659s to libmachine.API.Create "addons-775116"
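The provisioning step above wrote the --insecure-registry flag for the 10.96.0.0/12 service CIDR into /etc/sysconfig/crio.minikube and restarted CRI-O. A minimal way to confirm the override took effect on the guest (a sketch; paths taken from the log):

  out/minikube-linux-amd64 -p addons-775116 ssh -- cat /etc/sysconfig/crio.minikube
  out/minikube-linux-amd64 -p addons-775116 ssh -- systemctl is-active crio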
	I1124 02:38:54.450789  190426 start.go:293] postStartSetup for "addons-775116" (driver="kvm2")
	I1124 02:38:54.450803  190426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 02:38:54.450900  190426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 02:38:54.453904  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.454386  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:54.454419  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.454568  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:38:54.540751  190426 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 02:38:54.545232  190426 info.go:137] Remote host: Buildroot 2025.02
	I1124 02:38:54.545258  190426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/addons for local assets ...
	I1124 02:38:54.545329  190426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/files for local assets ...
	I1124 02:38:54.545354  190426 start.go:296] duration metric: took 94.558348ms for postStartSetup
	I1124 02:38:54.548251  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.548673  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:54.548700  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.548907  190426 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/config.json ...
	I1124 02:38:54.549074  190426 start.go:128] duration metric: took 17.979981924s to createHost
	I1124 02:38:54.551256  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.551624  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:54.551650  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.551850  190426 main.go:143] libmachine: Using SSH client type: native
	I1124 02:38:54.552044  190426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1124 02:38:54.552053  190426 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 02:38:54.662697  190426 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763951934.620945655
	
	I1124 02:38:54.662728  190426 fix.go:216] guest clock: 1763951934.620945655
	I1124 02:38:54.662737  190426 fix.go:229] Guest: 2025-11-24 02:38:54.620945655 +0000 UTC Remote: 2025-11-24 02:38:54.549085894 +0000 UTC m=+18.076638471 (delta=71.859761ms)
	I1124 02:38:54.662756  190426 fix.go:200] guest clock delta is within tolerance: 71.859761ms
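The clock check runs date +%s.%N over SSH and compares it to the host clock; the ~72ms delta is inside the skew tolerance. An equivalent manual comparison (sketch):

  echo "host:  $(date +%s.%N)"
  out/minikube-linux-amd64 -p addons-775116 ssh -- date +%s.%N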
	I1124 02:38:54.662761  190426 start.go:83] releasing machines lock for "addons-775116", held for 18.093731744s
	I1124 02:38:54.665565  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.665971  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:54.665989  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.666507  190426 ssh_runner.go:195] Run: cat /version.json
	I1124 02:38:54.666585  190426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 02:38:54.669545  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.669922  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:54.669927  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.669947  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.670142  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:38:54.670477  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:54.670521  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:54.670704  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:38:54.750809  190426 ssh_runner.go:195] Run: systemctl --version
	I1124 02:38:54.775345  190426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 02:38:54.930763  190426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 02:38:54.937909  190426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 02:38:54.937977  190426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 02:38:54.956531  190426 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 02:38:54.956559  190426 start.go:496] detecting cgroup driver to use...
	I1124 02:38:54.956624  190426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 02:38:54.975263  190426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 02:38:54.990470  190426 docker.go:218] disabling cri-docker service (if available) ...
	I1124 02:38:54.990557  190426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 02:38:55.006019  190426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 02:38:55.020646  190426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 02:38:55.159692  190426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 02:38:55.366088  190426 docker.go:234] disabling docker service ...
	I1124 02:38:55.366165  190426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 02:38:55.382007  190426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 02:38:55.396262  190426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 02:38:55.552330  190426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 02:38:55.693257  190426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 02:38:55.709149  190426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 02:38:55.730422  190426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 02:38:55.730497  190426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:38:55.742548  190426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 02:38:55.742625  190426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:38:55.754758  190426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:38:55.768542  190426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:38:55.780101  190426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 02:38:55.792411  190426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:38:55.804355  190426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:38:55.823795  190426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
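The sequence of sed edits above points CRI-O at the registry.k8s.io/pause:3.10.1 pause image, switches the cgroup manager to cgroupfs, pins conmon to the pod cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, all in /etc/crio/crio.conf.d/02-crio.conf. Consolidated, the core of it looks like this (a sketch of the commands already shown in the log; the log performs the restart a few lines further down, after daemon-reload):

  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
  sudo systemctl restart crio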
	I1124 02:38:55.837157  190426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 02:38:55.847043  190426 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 02:38:55.847113  190426 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 02:38:55.866468  190426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 02:38:55.877900  190426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:38:56.028657  190426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 02:38:56.502577  190426 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 02:38:56.502663  190426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 02:38:56.507853  190426 start.go:564] Will wait 60s for crictl version
	I1124 02:38:56.507911  190426 ssh_runner.go:195] Run: which crictl
	I1124 02:38:56.511639  190426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 02:38:56.542964  190426 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 02:38:56.543056  190426 ssh_runner.go:195] Run: crio --version
	I1124 02:38:56.569701  190426 ssh_runner.go:195] Run: crio --version
	I1124 02:38:56.597401  190426 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1124 02:38:56.601271  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:56.601722  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:38:56.601750  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:38:56.601982  190426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1124 02:38:56.606019  190426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 02:38:56.620303  190426 kubeadm.go:884] updating cluster {Name:addons-775116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
1 ClusterName:addons-775116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 02:38:56.620496  190426 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:38:56.620563  190426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 02:38:56.648084  190426 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 02:38:56.648150  190426 ssh_runner.go:195] Run: which lz4
	I1124 02:38:56.651878  190426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 02:38:56.656160  190426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1124 02:38:56.656185  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1124 02:38:57.971425  190426 crio.go:462] duration metric: took 1.319574692s to copy over tarball
	I1124 02:38:57.971525  190426 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1124 02:38:59.538448  190426 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.56688217s)
	I1124 02:38:59.538486  190426 crio.go:469] duration metric: took 1.567027934s to extract the tarball
	I1124 02:38:59.538501  190426 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1124 02:38:59.579829  190426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 02:38:59.625936  190426 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 02:38:59.625964  190426 cache_images.go:86] Images are preloaded, skipping loading
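Because the preload tarball was extracted into /var, the second crictl listing finds every required image and the per-image load step is skipped. A manual spot check from the host would look like this (sketch):

  out/minikube-linux-amd64 -p addons-775116 ssh -- sudo crictl images | grep kube-apiserver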
	I1124 02:38:59.625974  190426 kubeadm.go:935] updating node { 192.168.39.95 8443 v1.34.1 crio true true} ...
	I1124 02:38:59.626112  190426 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-775116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-775116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
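The kubelet flags above are rendered into a systemd drop-in; the 312-byte scp further down writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The effective unit can be inspected on the guest (sketch):

  out/minikube-linux-amd64 -p addons-775116 ssh -- systemctl cat kubelet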
	I1124 02:38:59.626206  190426 ssh_runner.go:195] Run: crio config
	I1124 02:38:59.671693  190426 cni.go:84] Creating CNI manager for ""
	I1124 02:38:59.671720  190426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 02:38:59.671742  190426 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 02:38:59.671765  190426 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-775116 NodeName:addons-775116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 02:38:59.671898  190426 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-775116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.95"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
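The rendered kubeadm config above is copied to the guest as /var/tmp/minikube/kubeadm.yaml.new a few lines below and promoted to /var/tmp/minikube/kubeadm.yaml before init. Once it is in place, it should be possible to exercise it without bootstrapping anything by running a dry run on the guest (sketch; binary path taken from this log):

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run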
	
	I1124 02:38:59.671964  190426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 02:38:59.684285  190426 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 02:38:59.684402  190426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 02:38:59.696197  190426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1124 02:38:59.717222  190426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 02:38:59.737441  190426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 02:38:59.757362  190426 ssh_runner.go:195] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I1124 02:38:59.761683  190426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 02:38:59.776005  190426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:38:59.918514  190426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 02:38:59.937784  190426 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116 for IP: 192.168.39.95
	I1124 02:38:59.937818  190426 certs.go:195] generating shared ca certs ...
	I1124 02:38:59.937847  190426 certs.go:227] acquiring lock for ca certs: {Name:mk173959192d8348177ca5710cbe68cc42fae47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:38:59.938032  190426 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key
	I1124 02:39:00.079938  190426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt ...
	I1124 02:39:00.079974  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt: {Name:mk27e99b8e5649f2c8e8f845df1e6551cc14428b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.080784  190426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key ...
	I1124 02:39:00.080808  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key: {Name:mk78008010612903a29431f3b2ab18445fce89cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.080897  190426 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key
	I1124 02:39:00.169281  190426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.crt ...
	I1124 02:39:00.169314  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.crt: {Name:mkc825ef9d2f19e3cff0176c5046ab65687a85ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.170046  190426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key ...
	I1124 02:39:00.170065  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key: {Name:mkf6d6544fe5347205b33b4f580da1305935a836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.170538  190426 certs.go:257] generating profile certs ...
	I1124 02:39:00.170609  190426 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.key
	I1124 02:39:00.170623  190426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt with IP's: []
	I1124 02:39:00.282345  190426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt ...
	I1124 02:39:00.282386  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: {Name:mkd47544fb27eede64888bce12c4efbd8176970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.283184  190426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.key ...
	I1124 02:39:00.283201  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.key: {Name:mkf326967ced04db7e77e9a16d7812dc9fa222ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.283686  190426 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.key.795d56e8
	I1124 02:39:00.283737  190426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.crt.795d56e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95]
	I1124 02:39:00.434481  190426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.crt.795d56e8 ...
	I1124 02:39:00.434521  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.crt.795d56e8: {Name:mkf1d24e277e6b89af2ae6595aaf461d32e760cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.434701  190426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.key.795d56e8 ...
	I1124 02:39:00.434715  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.key.795d56e8: {Name:mk6967dff2c271449dcd44d7cd100dd68cec0902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.434791  190426 certs.go:382] copying /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.crt.795d56e8 -> /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.crt
	I1124 02:39:00.434864  190426 certs.go:386] copying /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.key.795d56e8 -> /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.key
	I1124 02:39:00.434914  190426 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/proxy-client.key
	I1124 02:39:00.434933  190426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/proxy-client.crt with IP's: []
	I1124 02:39:00.491204  190426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/proxy-client.crt ...
	I1124 02:39:00.491232  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/proxy-client.crt: {Name:mk08802b90bb4c8a9411582b250efb6b0ebe7e03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.491409  190426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/proxy-client.key ...
	I1124 02:39:00.491427  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/proxy-client.key: {Name:mk591bd8fe888d0f04f4da2ac0a6c25f879f8220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:00.491608  190426 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 02:39:00.491645  190426 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem (1078 bytes)
	I1124 02:39:00.491669  190426 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem (1123 bytes)
	I1124 02:39:00.491692  190426 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem (1675 bytes)
	I1124 02:39:00.492298  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 02:39:00.523021  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 02:39:00.553107  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 02:39:00.583511  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 02:39:00.616340  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 02:39:00.647122  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 02:39:00.681010  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 02:39:00.710395  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 02:39:00.739193  190426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 02:39:00.768269  190426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 02:39:00.789071  190426 ssh_runner.go:195] Run: openssl version
	I1124 02:39:00.795482  190426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 02:39:00.808761  190426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:39:00.813803  190426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:39 /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:39:00.813881  190426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:39:00.821236  190426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
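The apiserver certificate generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.95 and copied to /var/lib/minikube/certs/apiserver.crt. Those SANs can be double-checked on the guest with openssl (sketch; look for the Subject Alternative Name extension in the output):

  out/minikube-linux-amd64 -p addons-775116 ssh -- sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt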
	I1124 02:39:00.834456  190426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 02:39:00.839242  190426 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 02:39:00.839312  190426 kubeadm.go:401] StartCluster: {Name:addons-775116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 C
lusterName:addons-775116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:39:00.839418  190426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:39:00.839516  190426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:39:00.874550  190426 cri.go:89] found id: ""
	I1124 02:39:00.874653  190426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 02:39:00.887073  190426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 02:39:00.899948  190426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 02:39:00.911947  190426 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 02:39:00.911972  190426 kubeadm.go:158] found existing configuration files:
	
	I1124 02:39:00.912019  190426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 02:39:00.923421  190426 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 02:39:00.923495  190426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 02:39:00.936033  190426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 02:39:00.946967  190426 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 02:39:00.947023  190426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 02:39:00.958763  190426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 02:39:00.969545  190426 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 02:39:00.969611  190426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 02:39:00.981725  190426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 02:39:00.993113  190426 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 02:39:00.993161  190426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 02:39:01.005137  190426 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1124 02:39:01.154028  190426 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 02:39:12.801455  190426 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 02:39:12.801556  190426 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 02:39:12.801678  190426 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 02:39:12.801805  190426 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 02:39:12.801964  190426 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 02:39:12.802072  190426 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 02:39:12.803120  190426 out.go:252]   - Generating certificates and keys ...
	I1124 02:39:12.803230  190426 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 02:39:12.803286  190426 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 02:39:12.803362  190426 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 02:39:12.803433  190426 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 02:39:12.803497  190426 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 02:39:12.803540  190426 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 02:39:12.803588  190426 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 02:39:12.803688  190426 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-775116 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I1124 02:39:12.803730  190426 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 02:39:12.803824  190426 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-775116 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I1124 02:39:12.803910  190426 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 02:39:12.804000  190426 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 02:39:12.804053  190426 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 02:39:12.804103  190426 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 02:39:12.804165  190426 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 02:39:12.804228  190426 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 02:39:12.804281  190426 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 02:39:12.804338  190426 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 02:39:12.804392  190426 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 02:39:12.804463  190426 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 02:39:12.804530  190426 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 02:39:12.806537  190426 out.go:252]   - Booting up control plane ...
	I1124 02:39:12.806615  190426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 02:39:12.806702  190426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 02:39:12.806805  190426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 02:39:12.806930  190426 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 02:39:12.807038  190426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 02:39:12.807144  190426 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 02:39:12.807269  190426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 02:39:12.807318  190426 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 02:39:12.807478  190426 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 02:39:12.807627  190426 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 02:39:12.807709  190426 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.809779ms
	I1124 02:39:12.807861  190426 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 02:39:12.807954  190426 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.95:8443/livez
	I1124 02:39:12.808029  190426 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 02:39:12.808101  190426 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 02:39:12.808185  190426 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.782837042s
	I1124 02:39:12.808267  190426 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.898640752s
	I1124 02:39:12.808343  190426 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501382431s
	I1124 02:39:12.808498  190426 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 02:39:12.808702  190426 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 02:39:12.808817  190426 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 02:39:12.809019  190426 kubeadm.go:319] [mark-control-plane] Marking the node addons-775116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 02:39:12.809070  190426 kubeadm.go:319] [bootstrap-token] Using token: zal3ie.0zmsw6bbkk7chj41
	I1124 02:39:12.811174  190426 out.go:252]   - Configuring RBAC rules ...
	I1124 02:39:12.811336  190426 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 02:39:12.811495  190426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 02:39:12.811673  190426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 02:39:12.811919  190426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 02:39:12.812104  190426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 02:39:12.812251  190426 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 02:39:12.812439  190426 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 02:39:12.812520  190426 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 02:39:12.812598  190426 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 02:39:12.812610  190426 kubeadm.go:319] 
	I1124 02:39:12.812691  190426 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 02:39:12.812708  190426 kubeadm.go:319] 
	I1124 02:39:12.812821  190426 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 02:39:12.812832  190426 kubeadm.go:319] 
	I1124 02:39:12.812857  190426 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 02:39:12.812970  190426 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 02:39:12.813054  190426 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 02:39:12.813064  190426 kubeadm.go:319] 
	I1124 02:39:12.813138  190426 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 02:39:12.813150  190426 kubeadm.go:319] 
	I1124 02:39:12.813230  190426 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 02:39:12.813240  190426 kubeadm.go:319] 
	I1124 02:39:12.813325  190426 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 02:39:12.813456  190426 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 02:39:12.813554  190426 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 02:39:12.813566  190426 kubeadm.go:319] 
	I1124 02:39:12.813680  190426 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 02:39:12.813787  190426 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 02:39:12.813798  190426 kubeadm.go:319] 
	I1124 02:39:12.813899  190426 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zal3ie.0zmsw6bbkk7chj41 \
	I1124 02:39:12.814039  190426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:40f9f8d245e87dfcd676995f2f148799897721892812b70c22eda7d58a9ddc01 \
	I1124 02:39:12.814066  190426 kubeadm.go:319] 	--control-plane 
	I1124 02:39:12.814071  190426 kubeadm.go:319] 
	I1124 02:39:12.814191  190426 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 02:39:12.814201  190426 kubeadm.go:319] 
	I1124 02:39:12.814322  190426 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zal3ie.0zmsw6bbkk7chj41 \
	I1124 02:39:12.814502  190426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:40f9f8d245e87dfcd676995f2f148799897721892812b70c22eda7d58a9ddc01 
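kubeadm init reports a healthy control plane (kubelet, kube-apiserver, kube-controller-manager and kube-scheduler checks all pass) and prints join commands that a single-node minikube cluster does not need. A basic sanity check against the new API server, run on the guest with the same kubectl invocation style the log uses (sketch):

  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes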
	I1124 02:39:12.814525  190426 cni.go:84] Creating CNI manager for ""
	I1124 02:39:12.814534  190426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 02:39:12.817672  190426 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 02:39:12.818775  190426 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 02:39:12.832172  190426 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
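minikube writes a small bridge CNI config (496 bytes) to /etc/cni/net.d/1-k8s.conflist; the exact contents are not shown in this log. For orientation only, a typical bridge plus portmap conflist for the 10.244.0.0/16 pod CIDR (the subnet chosen above) looks roughly like the following. This is illustrative, not the file minikube wrote, so it is written to a scratch path:

  sudo tee /tmp/1-k8s.conflist.example >/dev/null <<'EOF'
  {
    "cniVersion": "1.0.0",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "addIf": "true",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": {
          "type": "host-local",
          "ranges": [[{ "subnet": "10.244.0.0/16" }]]
        }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
EOF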
	I1124 02:39:12.857982  190426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 02:39:12.858181  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-775116 minikube.k8s.io/updated_at=2025_11_24T02_39_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=addons-775116 minikube.k8s.io/primary=true
	I1124 02:39:12.858195  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:13.063429  190426 ops.go:34] apiserver oom_adj: -16
	I1124 02:39:13.063464  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:13.564551  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:14.063721  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:14.563723  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:15.063915  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:15.564279  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:16.063589  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:16.563817  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:17.063549  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:17.563950  190426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:39:17.638484  190426 kubeadm.go:1114] duration metric: took 4.780469254s to wait for elevateKubeSystemPrivileges
	I1124 02:39:17.638534  190426 kubeadm.go:403] duration metric: took 16.799227375s to StartCluster
	I1124 02:39:17.638564  190426 settings.go:142] acquiring lock: {Name:mk66e7c24245b8d0d5ec4dc3d788350fb3f2b31a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:17.639362  190426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 02:39:17.639787  190426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/kubeconfig: {Name:mkcda9156e9d84203343cbeb8993f30147e2224f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:39:17.640547  190426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 02:39:17.640591  190426 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 02:39:17.640635  190426 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 02:39:17.640788  190426 addons.go:70] Setting gcp-auth=true in profile "addons-775116"
	I1124 02:39:17.640805  190426 addons.go:70] Setting ingress-dns=true in profile "addons-775116"
	I1124 02:39:17.640805  190426 config.go:182] Loaded profile config "addons-775116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:39:17.640821  190426 addons.go:239] Setting addon ingress-dns=true in "addons-775116"
	I1124 02:39:17.640832  190426 mustload.go:66] Loading cluster: addons-775116
	I1124 02:39:17.640819  190426 addons.go:70] Setting cloud-spanner=true in profile "addons-775116"
	I1124 02:39:17.640857  190426 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-775116"
	I1124 02:39:17.640866  190426 addons.go:239] Setting addon cloud-spanner=true in "addons-775116"
	I1124 02:39:17.640875  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.640900  190426 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-775116"
	I1124 02:39:17.640897  190426 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-775116"
	I1124 02:39:17.640915  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.640924  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.640926  190426 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-775116"
	I1124 02:39:17.640904  190426 addons.go:70] Setting default-storageclass=true in profile "addons-775116"
	I1124 02:39:17.640959  190426 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-775116"
	I1124 02:39:17.640972  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.641019  190426 config.go:182] Loaded profile config "addons-775116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:39:17.641600  190426 addons.go:70] Setting registry-creds=true in profile "addons-775116"
	I1124 02:39:17.641632  190426 addons.go:239] Setting addon registry-creds=true in "addons-775116"
	I1124 02:39:17.641661  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.641706  190426 addons.go:70] Setting metrics-server=true in profile "addons-775116"
	I1124 02:39:17.641734  190426 addons.go:239] Setting addon metrics-server=true in "addons-775116"
	I1124 02:39:17.641769  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.641856  190426 addons.go:70] Setting storage-provisioner=true in profile "addons-775116"
	I1124 02:39:17.641879  190426 addons.go:239] Setting addon storage-provisioner=true in "addons-775116"
	I1124 02:39:17.641909  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.642190  190426 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-775116"
	I1124 02:39:17.642208  190426 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-775116"
	I1124 02:39:17.640790  190426 addons.go:70] Setting yakd=true in profile "addons-775116"
	I1124 02:39:17.642247  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.642252  190426 addons.go:239] Setting addon yakd=true in "addons-775116"
	I1124 02:39:17.642279  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.642571  190426 addons.go:70] Setting inspektor-gadget=true in profile "addons-775116"
	I1124 02:39:17.642591  190426 addons.go:239] Setting addon inspektor-gadget=true in "addons-775116"
	I1124 02:39:17.642615  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.642716  190426 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-775116"
	I1124 02:39:17.642745  190426 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-775116"
	I1124 02:39:17.643063  190426 addons.go:70] Setting registry=true in profile "addons-775116"
	I1124 02:39:17.643087  190426 addons.go:239] Setting addon registry=true in "addons-775116"
	I1124 02:39:17.643088  190426 addons.go:70] Setting volcano=true in profile "addons-775116"
	I1124 02:39:17.643108  190426 addons.go:239] Setting addon volcano=true in "addons-775116"
	I1124 02:39:17.643112  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.643134  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.643194  190426 addons.go:70] Setting volumesnapshots=true in profile "addons-775116"
	I1124 02:39:17.643215  190426 addons.go:239] Setting addon volumesnapshots=true in "addons-775116"
	I1124 02:39:17.643241  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.640800  190426 addons.go:70] Setting ingress=true in profile "addons-775116"
	I1124 02:39:17.643424  190426 addons.go:239] Setting addon ingress=true in "addons-775116"
	I1124 02:39:17.643457  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.643856  190426 out.go:179] * Verifying Kubernetes components...
	I1124 02:39:17.645330  190426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:39:17.646854  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.648693  190426 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 02:39:17.649049  190426 addons.go:239] Setting addon default-storageclass=true in "addons-775116"
	I1124 02:39:17.649090  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.649381  190426 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 02:39:17.649417  190426 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 02:39:17.649369  190426 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 02:39:17.649981  190426 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 02:39:17.650027  190426 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 02:39:17.650063  190426 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 02:39:17.650610  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 02:39:17.650634  190426 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 02:39:17.651110  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 02:39:17.651400  190426 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 02:39:17.651418  190426 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 02:39:17.651426  190426 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	W1124 02:39:17.651777  190426 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 02:39:17.651445  190426 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 02:39:17.651799  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 02:39:17.651823  190426 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-775116"
	I1124 02:39:17.651864  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:17.651466  190426 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 02:39:17.651460  190426 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 02:39:17.652152  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 02:39:17.652413  190426 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 02:39:17.652443  190426 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 02:39:17.653096  190426 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 02:39:17.653121  190426 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 02:39:17.653407  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 02:39:17.653121  190426 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 02:39:17.653509  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 02:39:17.653765  190426 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 02:39:17.654398  190426 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 02:39:17.654433  190426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 02:39:17.654451  190426 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 02:39:17.654465  190426 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 02:39:17.654494  190426 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 02:39:17.654512  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 02:39:17.654542  190426 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 02:39:17.654667  190426 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 02:39:17.655756  190426 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 02:39:17.655764  190426 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 02:39:17.655773  190426 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 02:39:17.655815  190426 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 02:39:17.656712  190426 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 02:39:17.656729  190426 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 02:39:17.656828  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 02:39:17.657658  190426 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 02:39:17.658583  190426 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 02:39:17.658612  190426 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 02:39:17.659552  190426 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 02:39:17.659834  190426 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 02:39:17.659850  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 02:39:17.660912  190426 out.go:179]   - Using image docker.io/busybox:stable
	I1124 02:39:17.661116  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.661768  190426 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 02:39:17.661786  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 02:39:17.661817  190426 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 02:39:17.662667  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.662815  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.662869  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.662665  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.663257  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.663699  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.663741  190426 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 02:39:17.664310  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.664626  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.664662  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.665004  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.665341  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.665398  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.665455  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.665486  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.665520  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.665659  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.665697  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.665760  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.666053  190426 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 02:39:17.666252  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.666272  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.666666  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.666742  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.666774  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.666919  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.667334  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.667367  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.667404  190426 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 02:39:17.667416  190426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 02:39:17.667492  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.667351  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.667894  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.667937  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.668415  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.668453  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.668475  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.668831  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.669164  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.669218  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.669249  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.669352  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.669446  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.669519  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.669470  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.669624  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.670105  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.670342  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.670763  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.670797  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.671204  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.671663  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.672200  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.672238  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.672440  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.672495  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.672955  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.672980  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.673192  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:17.673513  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.673843  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:17.673874  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:17.674071  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:18.014624  190426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 02:39:18.032625  190426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 02:39:18.329336  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 02:39:18.345230  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 02:39:18.361394  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 02:39:18.378662  190426 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 02:39:18.378693  190426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 02:39:18.397266  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 02:39:18.407759  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 02:39:18.412420  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 02:39:18.414200  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 02:39:18.418364  190426 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 02:39:18.418392  190426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 02:39:18.421128  190426 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 02:39:18.421149  190426 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 02:39:18.435585  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 02:39:18.436944  190426 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 02:39:18.436968  190426 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 02:39:18.521301  190426 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 02:39:18.521323  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 02:39:18.531001  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 02:39:18.541906  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 02:39:18.676302  190426 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 02:39:18.676331  190426 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 02:39:18.699004  190426 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 02:39:18.699030  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 02:39:18.704547  190426 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 02:39:18.704580  190426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 02:39:18.755237  190426 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 02:39:18.755279  190426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 02:39:18.874109  190426 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 02:39:18.874143  190426 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 02:39:19.023174  190426 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 02:39:19.023204  190426 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 02:39:19.034133  190426 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 02:39:19.034156  190426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 02:39:19.036171  190426 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 02:39:19.036188  190426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 02:39:19.057937  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 02:39:19.237747  190426 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 02:39:19.237776  190426 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 02:39:19.327088  190426 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 02:39:19.327116  190426 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 02:39:19.333627  190426 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 02:39:19.333647  190426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 02:39:19.355682  190426 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 02:39:19.355707  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 02:39:19.707860  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 02:39:19.858707  190426 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 02:39:19.858745  190426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 02:39:19.881948  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 02:39:19.922491  190426 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 02:39:19.922517  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 02:39:19.936721  190426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.922048396s)
	I1124 02:39:19.936754  190426 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
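The long sed pipeline completed above rewrites the CoreDNS ConfigMap in place; the injected fragment is visible in the command itself. After the replace, the relevant part of the Corefile reads roughly as follows (surrounding directives abbreviated):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

This is what lets pods resolve host.minikube.internal to the host side of the VM network (192.168.39.1 in this run), with all other names falling through to the normal forwarder.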
	I1124 02:39:19.936815  190426 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.904147744s)
	I1124 02:39:19.937794  190426 node_ready.go:35] waiting up to 6m0s for node "addons-775116" to be "Ready" ...
	I1124 02:39:19.948843  190426 node_ready.go:49] node "addons-775116" is "Ready"
	I1124 02:39:19.948867  190426 node_ready.go:38] duration metric: took 11.044227ms for node "addons-775116" to be "Ready" ...
	I1124 02:39:19.948881  190426 api_server.go:52] waiting for apiserver process to appear ...
	I1124 02:39:19.948926  190426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:39:20.300552  190426 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 02:39:20.300589  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 02:39:20.515180  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 02:39:20.561418  190426 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-775116" context rescaled to 1 replicas
	I1124 02:39:20.685130  190426 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 02:39:20.685163  190426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 02:39:21.168547  190426 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 02:39:21.168572  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 02:39:21.579367  190426 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 02:39:21.579419  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 02:39:21.848656  190426 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 02:39:21.848692  190426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 02:39:22.143870  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 02:39:24.778713  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.449329235s)
	I1124 02:39:24.778808  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.433544402s)
	I1124 02:39:25.060410  190426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 02:39:25.063706  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:25.064183  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:25.064224  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:25.064450  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:25.482613  190426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 02:39:25.618959  190426 addons.go:239] Setting addon gcp-auth=true in "addons-775116"
	I1124 02:39:25.619026  190426 host.go:66] Checking if "addons-775116" exists ...
	I1124 02:39:25.621098  190426 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 02:39:25.623533  190426 main.go:143] libmachine: domain addons-775116 has defined MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:25.623955  190426 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:17:dd", ip: ""} in network mk-addons-775116: {Iface:virbr1 ExpiryTime:2025-11-24 03:38:51 +0000 UTC Type:0 Mac:52:54:00:05:17:dd Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:addons-775116 Clientid:01:52:54:00:05:17:dd}
	I1124 02:39:25.623984  190426 main.go:143] libmachine: domain addons-775116 has defined IP address 192.168.39.95 and MAC address 52:54:00:05:17:dd in network mk-addons-775116
	I1124 02:39:25.624145  190426 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/addons-775116/id_rsa Username:docker}
	I1124 02:39:25.702898  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.341461142s)
	I1124 02:39:25.702935  190426 addons.go:495] Verifying addon ingress=true in "addons-775116"
	I1124 02:39:25.702969  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.305663652s)
	I1124 02:39:25.703061  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.290611081s)
	I1124 02:39:25.703115  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.295324s)
	I1124 02:39:25.703255  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.289021545s)
	I1124 02:39:25.703291  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.267680875s)
	I1124 02:39:25.703342  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.172307688s)
	I1124 02:39:25.703416  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.161477676s)
	I1124 02:39:25.703455  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.645483683s)
	I1124 02:39:25.703474  190426 addons.go:495] Verifying addon registry=true in "addons-775116"
	I1124 02:39:25.703510  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.995616462s)
	I1124 02:39:25.703541  190426 addons.go:495] Verifying addon metrics-server=true in "addons-775116"
	I1124 02:39:25.703606  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.821616887s)
	I1124 02:39:25.703632  190426 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.754694818s)
	I1124 02:39:25.704043  190426 api_server.go:72] duration metric: took 8.063411207s to wait for apiserver process to appear ...
	I1124 02:39:25.704054  190426 api_server.go:88] waiting for apiserver healthz status ...
	I1124 02:39:25.704075  190426 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I1124 02:39:25.704523  190426 out.go:179] * Verifying ingress addon...
	I1124 02:39:25.705261  190426 out.go:179] * Verifying registry addon...
	I1124 02:39:25.705256  190426 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-775116 service yakd-dashboard -n yakd-dashboard
	
	I1124 02:39:25.706930  190426 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 02:39:25.707358  190426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 02:39:25.726159  190426 api_server.go:279] https://192.168.39.95:8443/healthz returned 200:
	ok
	I1124 02:39:25.738771  190426 api_server.go:141] control plane version: v1.34.1
	I1124 02:39:25.738799  190426 api_server.go:131] duration metric: took 34.738809ms to wait for apiserver health ...
	I1124 02:39:25.738808  190426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 02:39:25.748151  190426 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 02:39:25.748188  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:25.755121  190426 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 02:39:25.755147  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1124 02:39:25.785523  190426 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
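The 'storage-provisioner-rancher' warning above is a write conflict rather than a missing component: the addon callback tries to annotate the local-path StorageClass as default while another writer is updating the same object, and the optimistic-concurrency check rejects the stale update. A manual equivalent of what the callback attempts, which normally succeeds when re-run after the conflicting write, would be something like:

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'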
	I1124 02:39:25.787281  190426 system_pods.go:59] 15 kube-system pods found
	I1124 02:39:25.787316  190426 system_pods.go:61] "amd-gpu-device-plugin-z2tw8" [79a82507-05b4-487d-94bb-6af479fcb8b7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 02:39:25.787322  190426 system_pods.go:61] "coredns-66bc5c9577-qz7nr" [f7a129b9-37bd-4d5d-86b0-5b27df489ff9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 02:39:25.787330  190426 system_pods.go:61] "coredns-66bc5c9577-vcl2x" [5742c0ea-d4aa-46f6-b72f-ba0a9817e7f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 02:39:25.787335  190426 system_pods.go:61] "etcd-addons-775116" [da82e7e5-54a7-4ee2-b5df-fe560ae7e8ca] Running
	I1124 02:39:25.787339  190426 system_pods.go:61] "kube-apiserver-addons-775116" [3338006d-411e-4a4a-ad4e-a30e55609de2] Running
	I1124 02:39:25.787343  190426 system_pods.go:61] "kube-controller-manager-addons-775116" [13146270-7253-471a-8f5a-f0358b0142bc] Running
	I1124 02:39:25.787349  190426 system_pods.go:61] "kube-ingress-dns-minikube" [524486a5-e1af-4d71-a006-7fba106a887f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 02:39:25.787354  190426 system_pods.go:61] "kube-proxy-tchwq" [08987ca8-5300-448b-9cbd-e202a3e4efa7] Running
	I1124 02:39:25.787359  190426 system_pods.go:61] "kube-scheduler-addons-775116" [51c55ad9-531c-4589-9a24-3e9ac52f4c7d] Running
	I1124 02:39:25.787366  190426 system_pods.go:61] "metrics-server-85b7d694d7-bh5pm" [37ba89e4-4050-4ee0-94e4-767ac24d4f1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 02:39:25.787385  190426 system_pods.go:61] "nvidia-device-plugin-daemonset-5pz67" [0859f4f5-557a-4bbe-a610-e1d991b4d68d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 02:39:25.787400  190426 system_pods.go:61] "registry-6b586f9694-r9pj2" [2b90c53d-da92-4cbd-b0c1-9bdc2175baac] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 02:39:25.787407  190426 system_pods.go:61] "registry-creds-764b6fb674-b57rq" [aa3f231e-2ef8-4968-9220-743d68a948df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 02:39:25.787412  190426 system_pods.go:61] "registry-proxy-tgmzc" [820cb950-bbf5-4368-a91e-279938b4d42c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 02:39:25.787417  190426 system_pods.go:61] "storage-provisioner" [ee14bfe7-27ce-45c4-b62b-7e46a12726ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 02:39:25.787425  190426 system_pods.go:74] duration metric: took 48.610261ms to wait for pod list to return data ...
	I1124 02:39:25.787434  190426 default_sa.go:34] waiting for default service account to be created ...
	I1124 02:39:25.806504  190426 default_sa.go:45] found service account: "default"
	I1124 02:39:25.806529  190426 default_sa.go:55] duration metric: took 19.090056ms for default service account to be created ...
	I1124 02:39:25.806540  190426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 02:39:25.904642  190426 system_pods.go:86] 15 kube-system pods found
	I1124 02:39:25.904686  190426 system_pods.go:89] "amd-gpu-device-plugin-z2tw8" [79a82507-05b4-487d-94bb-6af479fcb8b7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 02:39:25.904696  190426 system_pods.go:89] "coredns-66bc5c9577-qz7nr" [f7a129b9-37bd-4d5d-86b0-5b27df489ff9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 02:39:25.904706  190426 system_pods.go:89] "coredns-66bc5c9577-vcl2x" [5742c0ea-d4aa-46f6-b72f-ba0a9817e7f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 02:39:25.904712  190426 system_pods.go:89] "etcd-addons-775116" [da82e7e5-54a7-4ee2-b5df-fe560ae7e8ca] Running
	I1124 02:39:25.904718  190426 system_pods.go:89] "kube-apiserver-addons-775116" [3338006d-411e-4a4a-ad4e-a30e55609de2] Running
	I1124 02:39:25.904724  190426 system_pods.go:89] "kube-controller-manager-addons-775116" [13146270-7253-471a-8f5a-f0358b0142bc] Running
	I1124 02:39:25.904735  190426 system_pods.go:89] "kube-ingress-dns-minikube" [524486a5-e1af-4d71-a006-7fba106a887f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 02:39:25.904741  190426 system_pods.go:89] "kube-proxy-tchwq" [08987ca8-5300-448b-9cbd-e202a3e4efa7] Running
	I1124 02:39:25.904746  190426 system_pods.go:89] "kube-scheduler-addons-775116" [51c55ad9-531c-4589-9a24-3e9ac52f4c7d] Running
	I1124 02:39:25.904756  190426 system_pods.go:89] "metrics-server-85b7d694d7-bh5pm" [37ba89e4-4050-4ee0-94e4-767ac24d4f1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 02:39:25.904775  190426 system_pods.go:89] "nvidia-device-plugin-daemonset-5pz67" [0859f4f5-557a-4bbe-a610-e1d991b4d68d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 02:39:25.904784  190426 system_pods.go:89] "registry-6b586f9694-r9pj2" [2b90c53d-da92-4cbd-b0c1-9bdc2175baac] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 02:39:25.904793  190426 system_pods.go:89] "registry-creds-764b6fb674-b57rq" [aa3f231e-2ef8-4968-9220-743d68a948df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 02:39:25.904800  190426 system_pods.go:89] "registry-proxy-tgmzc" [820cb950-bbf5-4368-a91e-279938b4d42c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 02:39:25.904808  190426 system_pods.go:89] "storage-provisioner" [ee14bfe7-27ce-45c4-b62b-7e46a12726ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 02:39:25.904820  190426 system_pods.go:126] duration metric: took 98.271489ms to wait for k8s-apps to be running ...
	I1124 02:39:25.904835  190426 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 02:39:25.904899  190426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:39:26.051385  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.536145927s)
	W1124 02:39:26.051425  190426 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 02:39:26.051452  190426 retry.go:31] will retry after 235.849349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
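The error above is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the REST mapping for the new kind is not registered yet. minikube copes by retrying (235.849349ms later, then the kubectl apply --force run at 02:39:26.287554, which completes cleanly at 02:39:28.292367), which suggests this message is transient noise rather than the direct cause of the Ingress failure. A minimal hand-run sketch that avoids the race with the same manifest paths (the CRD readiness wait is added here as an illustration, not something minikube itself runs) is:

    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # block until the API server has registered the new kinds
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # only now can resources of kind VolumeSnapshotClass be applied
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml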
	I1124 02:39:26.234182  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:26.234483  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:26.287554  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 02:39:26.768769  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:26.768940  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:26.844471  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.700507427s)
	I1124 02:39:26.844513  190426 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.223378115s)
	I1124 02:39:26.844535  190426 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-775116"
	I1124 02:39:26.844538  190426 system_svc.go:56] duration metric: took 939.697958ms WaitForService to wait for kubelet
	I1124 02:39:26.844556  190426 kubeadm.go:587] duration metric: took 9.203926317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 02:39:26.844580  190426 node_conditions.go:102] verifying NodePressure condition ...
	I1124 02:39:26.846114  190426 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 02:39:26.846121  190426 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 02:39:26.847213  190426 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 02:39:26.848090  190426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 02:39:26.848146  190426 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 02:39:26.848162  190426 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 02:39:26.880345  190426 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 02:39:26.880381  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:26.900059  190426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 02:39:26.900087  190426 node_conditions.go:123] node cpu capacity is 2
	I1124 02:39:26.900124  190426 node_conditions.go:105] duration metric: took 55.536811ms to run NodePressure ...
	I1124 02:39:26.900140  190426 start.go:242] waiting for startup goroutines ...
	I1124 02:39:27.000120  190426 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 02:39:27.000151  190426 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 02:39:27.092053  190426 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 02:39:27.092078  190426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 02:39:27.166360  190426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
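For the gcp-auth addon the pattern is the same as for the other addons in this log: the namespace and service manifests are copied from local assets onto the node, the webhook manifest is generated in memory (hence "scp memory -->", presumably embedding the google_application_credentials.json read at 02:39:26.844513), and all three are applied with the kubectl binary bundled on the node. A hedged way to inspect what that apply created, reusing the node-local kubeconfig and binary paths shown in the log, is:

    minikube -p addons-775116 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl -n gcp-auth get deploy,svc,pods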
	I1124 02:39:27.237446  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:27.237491  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:27.365966  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:27.717885  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:27.717942  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:27.854847  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:28.220044  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:28.220540  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:28.292367  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.004751562s)
	I1124 02:39:28.369043  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:28.651131  190426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.484714854s)
	I1124 02:39:28.652236  190426 addons.go:495] Verifying addon gcp-auth=true in "addons-775116"
	I1124 02:39:28.653887  190426 out.go:179] * Verifying gcp-auth addon...
	I1124 02:39:28.655823  190426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 02:39:28.712318  190426 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 02:39:28.712350  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:28.743923  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:28.745683  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:28.855725  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:29.165069  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:29.264883  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:29.266590  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:29.365362  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:29.659623  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:29.710897  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:29.710948  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:29.851978  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:30.159233  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:30.211097  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:30.211838  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:30.352677  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:30.660231  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:30.711054  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:30.711728  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:30.852393  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:31.161047  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:31.212906  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:31.214138  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:31.352437  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:31.660887  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:31.715007  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:31.715947  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:31.852636  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:32.161796  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:32.215595  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:32.215662  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:32.356414  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:32.660105  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:32.713057  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:32.713096  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:32.853789  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:33.159617  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:33.211864  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:33.214339  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:33.351882  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:33.661394  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:33.711130  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:33.714113  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:33.852582  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:34.160681  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:34.213231  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:34.213971  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:34.353219  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:34.659108  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:34.713881  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:34.714185  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:34.851543  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:35.159719  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:35.211712  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:35.212604  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:35.352143  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:35.659394  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:35.711615  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:35.712336  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:35.852221  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:36.159463  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:36.210330  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:36.212312  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:36.351725  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:36.659014  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:36.709861  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:36.710482  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:36.852212  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:37.159204  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:37.209772  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:37.210457  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:37.352184  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:37.660498  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:37.711022  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:37.712211  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:37.853287  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:38.159144  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:38.210839  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:38.210928  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:38.353638  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:38.659538  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:38.711159  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:38.712457  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:38.853152  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:39.159224  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:39.210389  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:39.210756  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:39.353430  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:39.661043  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:39.714198  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:39.715333  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:39.851499  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:40.159877  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:40.210761  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:40.211840  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:40.352311  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:40.659454  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:40.711051  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:40.711201  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:40.851929  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:41.159138  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:41.227054  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:41.228517  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:41.351831  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:41.661823  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:41.762307  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:41.762314  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:41.853690  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:42.160178  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:42.210247  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:42.210666  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:42.352068  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:42.659431  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:42.711172  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:42.712176  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:42.851588  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:43.160265  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:43.210527  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:43.210756  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:43.352556  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:43.659780  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:43.710403  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:43.711348  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:43.852266  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:44.163019  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:44.209949  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:44.212405  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:44.352888  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:44.663550  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:44.710585  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:44.711650  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:44.853912  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:45.159623  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:45.214200  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:45.215123  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:45.354152  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:45.785254  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:45.787919  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:45.787951  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:45.851805  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:46.158979  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:46.214530  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:46.214624  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:46.352368  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:46.660164  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:46.712476  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:46.712877  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:46.853564  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:47.160990  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:47.213940  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:47.214131  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:47.351801  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:47.660929  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:47.713743  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:47.713939  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:48.011833  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:48.160089  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:48.210076  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:48.210345  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:48.352249  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:48.659789  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:48.714259  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:48.714677  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:48.852800  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:49.160182  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:49.211094  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:49.212539  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:49.352397  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:49.659764  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:49.714491  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:49.715912  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:49.852104  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:50.247212  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:50.250140  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:50.252531  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:50.354111  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:50.663954  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:50.712559  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:50.712758  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:50.853264  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:51.159495  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:51.294453  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:51.298239  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:51.352887  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:51.659644  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:51.711958  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:51.713485  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:51.851908  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:52.161871  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:52.211572  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:52.213134  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:52.353273  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:52.659259  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:52.710239  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:52.711021  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:52.851285  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:53.159616  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:53.213945  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:53.214186  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:53.352208  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:53.659632  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:53.710864  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:53.711868  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:53.852904  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:54.159262  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:54.212924  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:54.217012  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:54.356872  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:54.658819  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:54.712018  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:54.712987  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:54.854119  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:55.159812  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:55.211145  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:55.213596  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:55.354083  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:55.659829  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:55.711168  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:55.711223  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:55.851722  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:56.159871  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:56.219821  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:56.219901  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:56.352220  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:56.659123  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:56.709831  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:56.711229  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:56.851269  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:57.159331  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:57.213610  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:57.215512  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:57.351586  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:57.660120  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:57.709807  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:57.710246  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:57.853265  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:58.159437  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:58.212679  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:58.215143  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:58.354482  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:58.658787  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:58.713724  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:58.713864  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:58.853036  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:59.160323  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:59.211267  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:59.212421  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:59.354537  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:39:59.660120  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:39:59.710200  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:39:59.711653  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:39:59.852051  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:00.159202  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:00.210079  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:00.211228  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:40:00.352528  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:00.678815  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:00.710568  190426 kapi.go:107] duration metric: took 35.003204274s to wait for kubernetes.io/minikube-addons=registry ...
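The long run of kapi.go:96 lines in this stretch is minikube polling each addon's pods by label selector until they leave Pending; the registry selector converges here after 35.003204274s while gcp-auth, ingress-nginx and csi-hostpath-driver keep polling. An equivalent one-shot check against the same cluster (selectors copied from the log; the kube-system and gcp-auth namespaces come from the log, the ingress-nginx namespace is assumed) would be:

    kubectl --context addons-775116 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=5m
    kubectl --context addons-775116 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=5m
    kubectl --context addons-775116 -n gcp-auth wait pod \
      -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=5m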
	I1124 02:40:00.711825  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:00.852463  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:01.159661  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:01.211491  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:01.352246  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:01.665658  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:01.711507  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:01.852671  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:02.161382  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:02.211366  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:02.352310  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:02.660309  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:02.710666  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:02.854233  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:03.160534  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:03.211777  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:03.353091  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:03.659605  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:03.712479  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:03.853889  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:04.160770  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:04.261454  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:04.361154  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:04.659388  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:04.709857  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:04.852169  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:05.159986  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:05.210332  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:05.352131  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:05.663875  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:05.713952  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:05.852907  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:06.163262  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:06.213076  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:06.354846  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:06.659595  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:06.713099  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:06.852003  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:07.158656  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:07.211466  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:07.351829  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:07.659954  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:07.711634  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:07.852183  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:08.159198  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:08.212991  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:08.351952  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:08.659576  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:08.722559  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:08.852667  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:09.159329  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:09.213101  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:09.352701  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:09.659957  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:09.711409  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:09.853217  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:10.160110  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:10.210307  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:10.352021  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:10.660161  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:10.710644  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:10.851829  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:11.160063  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:11.211019  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:11.353855  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:11.660123  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:11.710436  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:11.851736  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:12.160077  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:12.211713  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:12.353774  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:12.664811  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:12.710589  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:12.853403  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:13.161422  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:13.212526  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:13.355290  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:13.661311  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:13.710774  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:13.853398  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:14.159436  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:14.212232  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:14.351397  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:14.660928  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:14.711760  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:14.851833  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:15.161607  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:15.211200  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:15.353427  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:15.660342  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:15.710551  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:15.853271  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:16.162132  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:16.214150  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:16.352954  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:16.666190  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:16.716063  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:16.851935  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:17.158923  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:17.212454  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:17.351834  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:17.661609  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:17.711194  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:17.851650  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:18.159751  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:18.213281  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:18.352760  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:18.662049  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:18.712554  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:18.969581  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:19.161214  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:19.261785  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:19.352362  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:19.661841  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:19.712600  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:19.852080  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:20.160395  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:20.211548  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:20.354393  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:20.660109  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:20.835086  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:20.859722  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:21.162075  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:21.261249  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:21.352475  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:21.659617  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:21.711304  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:21.853190  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:22.160038  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:22.211126  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:22.352624  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:22.662235  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:22.710306  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:22.851779  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:23.160056  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:23.210787  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:23.355824  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:23.659141  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:23.714201  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:23.851751  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:24.159893  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:24.262078  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:24.363364  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:24.660221  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:24.712143  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:24.853244  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:25.160438  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:25.211671  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:25.359053  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:25.659427  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:25.710867  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:25.853068  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:26.160036  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:26.212122  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:26.608442  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:26.658864  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:26.712277  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:26.855071  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:27.159814  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:27.212961  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:27.353662  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:27.660662  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:27.711742  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:27.854596  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:28.162513  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:28.265294  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:28.351444  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:28.663157  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:28.710581  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:28.852740  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:29.161008  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:29.210867  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:29.353162  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:29.662304  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:29.761789  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:29.851861  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:30.161172  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:30.212209  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:30.355383  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:30.660935  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:30.711018  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:30.854720  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:31.160778  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:31.261453  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:31.351725  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:31.659018  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:31.709680  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:31.855405  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:32.164276  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:32.211615  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:32.353165  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:40:32.663089  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:32.711906  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:32.855831  190426 kapi.go:107] duration metric: took 1m6.007736944s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 02:40:33.169004  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:33.217624  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:33.659571  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:33.711279  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:34.159981  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:34.211861  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:34.660954  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:34.714605  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:35.195284  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:35.211895  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:35.660581  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:35.711643  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:36.160959  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:36.259986  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:36.661979  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:36.712905  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:37.164385  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:37.210769  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:37.660321  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:37.710708  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:38.158956  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:38.211274  190426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:40:38.660467  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:38.713423  190426 kapi.go:107] duration metric: took 1m13.006488286s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 02:40:39.160551  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:39.658783  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:40.159845  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:40.659588  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:41.160900  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:41.661353  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:42.159733  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:42.659783  190426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:40:43.160121  190426 kapi.go:107] duration metric: took 1m14.504293992s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 02:40:43.161709  190426 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-775116 cluster.
	I1124 02:40:43.162747  190426 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 02:40:43.163757  190426 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 02:40:43.164715  190426 out.go:179] * Enabled addons: inspektor-gadget, ingress-dns, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1124 02:40:43.165505  190426 addons.go:530] duration metric: took 1m25.524876558s for enable addons: enabled=[inspektor-gadget ingress-dns amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner registry-creds metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1124 02:40:43.165546  190426 start.go:247] waiting for cluster config update ...
	I1124 02:40:43.165565  190426 start.go:256] writing updated cluster config ...
	I1124 02:40:43.165825  190426 ssh_runner.go:195] Run: rm -f paused
	I1124 02:40:43.171671  190426 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 02:40:43.175948  190426 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qz7nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:43.180932  190426 pod_ready.go:94] pod "coredns-66bc5c9577-qz7nr" is "Ready"
	I1124 02:40:43.180970  190426 pod_ready.go:86] duration metric: took 4.996537ms for pod "coredns-66bc5c9577-qz7nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:43.184128  190426 pod_ready.go:83] waiting for pod "etcd-addons-775116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:43.189296  190426 pod_ready.go:94] pod "etcd-addons-775116" is "Ready"
	I1124 02:40:43.189326  190426 pod_ready.go:86] duration metric: took 5.17508ms for pod "etcd-addons-775116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:43.191896  190426 pod_ready.go:83] waiting for pod "kube-apiserver-addons-775116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:43.196924  190426 pod_ready.go:94] pod "kube-apiserver-addons-775116" is "Ready"
	I1124 02:40:43.196956  190426 pod_ready.go:86] duration metric: took 5.036183ms for pod "kube-apiserver-addons-775116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:43.198935  190426 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-775116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:43.586142  190426 pod_ready.go:94] pod "kube-controller-manager-addons-775116" is "Ready"
	I1124 02:40:43.586170  190426 pod_ready.go:86] duration metric: took 387.209027ms for pod "kube-controller-manager-addons-775116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:43.777132  190426 pod_ready.go:83] waiting for pod "kube-proxy-tchwq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:44.175737  190426 pod_ready.go:94] pod "kube-proxy-tchwq" is "Ready"
	I1124 02:40:44.175765  190426 pod_ready.go:86] duration metric: took 398.606618ms for pod "kube-proxy-tchwq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:44.375775  190426 pod_ready.go:83] waiting for pod "kube-scheduler-addons-775116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:44.776474  190426 pod_ready.go:94] pod "kube-scheduler-addons-775116" is "Ready"
	I1124 02:40:44.776501  190426 pod_ready.go:86] duration metric: took 400.700001ms for pod "kube-scheduler-addons-775116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:40:44.776515  190426 pod_ready.go:40] duration metric: took 1.604798701s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 02:40:44.821848  190426 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 02:40:44.824246  190426 out.go:179] * Done! kubectl is now configured to use "addons-775116" cluster and "default" namespace by default
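	
	(Editor's note, not part of the captured log.) The gcp-auth messages at 02:40:43 above say a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod manifest, for reference only: the label key is taken from the log text, while the value "true", the pod name, and the image are illustrative assumptions, not something this test run used.
	
	# Sketch only: a pod that opts out of GCP credential mounting by the gcp-auth addon.
	# Label key `gcp-auth-skip-secret` comes from the log message above; the value "true",
	# the pod name, and the image are assumptions for illustration.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-example      # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"  # assumed value; the log only names the key
	spec:
	  containers:
	    - name: app
	      image: nginx                # placeholder image
	
	As the 02:40:43 log lines also note, pods that already exist would need to be recreated (or the addon re-enabled with --refresh) for a change like this to take effect.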
	
	
	==> CRI-O <==
	Nov 24 02:43:53 addons-775116 crio[811]: time="2025-11-24 02:43:53.993172424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20549cf4-e867-46b9-9a2a-39c02b22d580 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:53 addons-775116 crio[811]: time="2025-11-24 02:43:53.993437748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20549cf4-e867-46b9-9a2a-39c02b22d580 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:53 addons-775116 crio[811]: time="2025-11-24 02:43:53.994194205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5545a8489a94c6304f0ab016a419587baab11f09fc4656a3e253460a59bc1d0d,PodSandboxId:6a544602be6f67919b1cba9f049677d9994f6a9b251530ae5a8372aa7094a35e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763952090267839168,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68f27977-c2e9-4ef1-9e72-5688758a7fd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453de60e000d723f093391e5151e038d4b9645f44c8745787bb671e9741a26c8,PodSandboxId:b1633ec2b0e85b19131f6af5cba32875f441d4a9d94ad4641ec5a60085caa6f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763952049126082171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cf30ca55-b9ac-463f-85eb-2b8d09b207a3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3444d2b913ffd0b99172eecdb3b64894f4221a7aa8d06c314cbff3766ba6ce71,PodSandboxId:f201e0ee13ac375ccdd7606037251e3419ef845e883ca348796f519f0cb16e71,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763952038419505333,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-l8kfr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93f5dcf2-c2fd-4d18-b62a-74c19d73c35b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:478b81ff07c3de547e1695798df93280f99fe23a32b487b6267540e9d5453daf,PodSandboxId:1226a110b65350188c7535c573aaded49fd5feeeedadfb7ec4818732050236aa,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763952021026503647,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp68l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: acd1300e-3895-44f9-bd25-990a91bc9e60,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59abadbc268f89f6ee43f1f9369f9117b9b11e5c00ba528c972586bbd1d4e7f6,PodSandboxId:538ebc897fda3fec2bd1f637b10921397b751c7f54a0f03b4b0a25b54a0a1e0d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763952017548753508,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6krct,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f289671-5a8b-4709-9248-850ce4026202,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961479e2f20b79d8438d1eee0e233c5ffdd77e34964bd785d08157b62efb94ec,PodSandboxId:863fe3c234b0a27a1cf1ff63fccc59379cf9a7845f9d1a94e9f7d577ed7ee4fd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763952002318465622,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-wx46f,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a15c2cc-ac93-489a-a5b4-587809a6c7c4,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b120024be2063321b8dd5a57b463fe6a98e3145329d60d69fb90095168e745,PodSandboxId:307a4331f24b0a6a6d7204e133d571ff92b42ec65321fc5d2aa3ca81de59987d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763951991399729290,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524486a5-e1af-4d71-a006-7fba106a887f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce99a3167bc835c18eb967965782de679ae1debc5ab2a16d000b4fbcba58025d,PodSandboxId:1705a6e46aa5ee3647c147a50767bea4c2129ee66b3f81f7e0a1bacf6c360bac,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763951967797399831,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z2tw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a82507-05b4-487d-94bb-6af479fcb8b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c83bf2fc6a39d554e21e04c55f2617e0faa331eb4124c0de9ba57674c84bb22,PodSandboxId:9bbf1f1e81591690dd0c3c03890b52130b
cdf8266eba91c21f15b878eec1c684,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763951966426227414,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee14bfe7-27ce-45c4-b62b-7e46a12726ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fa790c4b2fcf389c29ba39759a86d3d0274533450f1bc7ecf0a1c861e48fdd,PodSandboxId:cdcb35d1722cf273c5dace895721bfea5bbe6ae2ebbfac
00a039875bb2a0aa6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763951960274852589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qz7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a129b9-37bd-4d5d-86b0-5b27df489ff9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849f33d66481cc26e4126e332a05155c87bcada74d63aef60dbdabf525191155,PodSandboxId:6067e037337155479c9c5f69c5224d9cda2164c554ede7ab1765836387546dda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763951958415872974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tchwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08987ca8-5300-448b-9cbd-e202a3e4efa7,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66d34be21d78b639559feed26d01cecadeb1e98ace08feb88c62f66503da8ff,PodSandboxId:f3e9f27dd22750666214876004975df8edda06ea62a8a74fa7f8139f98224d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763951946646064644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58956e974452ecb51e89500e2dcdbb98,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d488abb14c037d8930bec0d82a4d3592e9ea59951554eabdc4483e5ea5ab2f,PodSandboxId:ba4a76354f0771aa95089678c3d267b1db5de3f99df2fa1d57c8b2f976dd3827,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763951946640886073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f7b4e7e5b19513a791c8ae3cad93
e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24849245f24b64a653900e24d6d1ef3f7e3bfe7328aea9dd027494f82f1e7eb,PodSandboxId:d50bf88eb38b6d1bcd0105df0c195775dcc341462f7b43647658058adba7d7a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763951946620914502,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-775116,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 384ced268f38816576368a160deba535,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8fc7d3819ed5c339b62259bc67753cee24982db0369be0b38a6ca8f966bf04,PodSandboxId:da98fb4787b0bb0f2de2ffc7419e7646e38d0a52136eb99652528d0bd427d2a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763951946587345508,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3080f702a12a82962ed3da2e153b5f5e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20549cf4-e867-46b9-9a2a-39c02b22d580 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.029335103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6dad92f7-885f-4b6c-8eb9-2b15283e28b0 name=/runtime.v1.RuntimeService/Version
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.029426676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6dad92f7-885f-4b6c-8eb9-2b15283e28b0 name=/runtime.v1.RuntimeService/Version
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.031118989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e742b294-c028-4a02-87cb-4498b25578a9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.033388600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763952234033358524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e742b294-c028-4a02-87cb-4498b25578a9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.034371967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5f5b8a6-b9e0-4650-9202-9fcde4ce612a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.034433249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5f5b8a6-b9e0-4650-9202-9fcde4ce612a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.034747392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5545a8489a94c6304f0ab016a419587baab11f09fc4656a3e253460a59bc1d0d,PodSandboxId:6a544602be6f67919b1cba9f049677d9994f6a9b251530ae5a8372aa7094a35e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763952090267839168,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68f27977-c2e9-4ef1-9e72-5688758a7fd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453de60e000d723f093391e5151e038d4b9645f44c8745787bb671e9741a26c8,PodSandboxId:b1633ec2b0e85b19131f6af5cba32875f441d4a9d94ad4641ec5a60085caa6f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763952049126082171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cf30ca55-b9ac-463f-85eb-2b8d09b207a3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3444d2b913ffd0b99172eecdb3b64894f4221a7aa8d06c314cbff3766ba6ce71,PodSandboxId:f201e0ee13ac375ccdd7606037251e3419ef845e883ca348796f519f0cb16e71,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763952038419505333,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-l8kfr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93f5dcf2-c2fd-4d18-b62a-74c19d73c35b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:478b81ff07c3de547e1695798df93280f99fe23a32b487b6267540e9d5453daf,PodSandboxId:1226a110b65350188c7535c573aaded49fd5feeeedadfb7ec4818732050236aa,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763952021026503647,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp68l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: acd1300e-3895-44f9-bd25-990a91bc9e60,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59abadbc268f89f6ee43f1f9369f9117b9b11e5c00ba528c972586bbd1d4e7f6,PodSandboxId:538ebc897fda3fec2bd1f637b10921397b751c7f54a0f03b4b0a25b54a0a1e0d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763952017548753508,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6krct,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f289671-5a8b-4709-9248-850ce4026202,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961479e2f20b79d8438d1eee0e233c5ffdd77e34964bd785d08157b62efb94ec,PodSandboxId:863fe3c234b0a27a1cf1ff63fccc59379cf9a7845f9d1a94e9f7d577ed7ee4fd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763952002318465622,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-wx46f,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a15c2cc-ac93-489a-a5b4-587809a6c7c4,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b120024be2063321b8dd5a57b463fe6a98e3145329d60d69fb90095168e745,PodSandboxId:307a4331f24b0a6a6d7204e133d571ff92b42ec65321fc5d2aa3ca81de59987d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763951991399729290,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524486a5-e1af-4d71-a006-7fba106a887f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce99a3167bc835c18eb967965782de679ae1debc5ab2a16d000b4fbcba58025d,PodSandboxId:1705a6e46aa5ee3647c147a50767bea4c2129ee66b3f81f7e0a1bacf6c360bac,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763951967797399831,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z2tw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a82507-05b4-487d-94bb-6af479fcb8b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c83bf2fc6a39d554e21e04c55f2617e0faa331eb4124c0de9ba57674c84bb22,PodSandboxId:9bbf1f1e81591690dd0c3c03890b52130b
cdf8266eba91c21f15b878eec1c684,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763951966426227414,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee14bfe7-27ce-45c4-b62b-7e46a12726ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fa790c4b2fcf389c29ba39759a86d3d0274533450f1bc7ecf0a1c861e48fdd,PodSandboxId:cdcb35d1722cf273c5dace895721bfea5bbe6ae2ebbfac
00a039875bb2a0aa6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763951960274852589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qz7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a129b9-37bd-4d5d-86b0-5b27df489ff9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849f33d66481cc26e4126e332a05155c87bcada74d63aef60dbdabf525191155,PodSandboxId:6067e037337155479c9c5f69c5224d9cda2164c554ede7ab1765836387546dda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763951958415872974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tchwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08987ca8-5300-448b-9cbd-e202a3e4efa7,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66d34be21d78b639559feed26d01cecadeb1e98ace08feb88c62f66503da8ff,PodSandboxId:f3e9f27dd22750666214876004975df8edda06ea62a8a74fa7f8139f98224d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763951946646064644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58956e974452ecb51e89500e2dcdbb98,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d488abb14c037d8930bec0d82a4d3592e9ea59951554eabdc4483e5ea5ab2f,PodSandboxId:ba4a76354f0771aa95089678c3d267b1db5de3f99df2fa1d57c8b2f976dd3827,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763951946640886073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f7b4e7e5b19513a791c8ae3cad93
e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24849245f24b64a653900e24d6d1ef3f7e3bfe7328aea9dd027494f82f1e7eb,PodSandboxId:d50bf88eb38b6d1bcd0105df0c195775dcc341462f7b43647658058adba7d7a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763951946620914502,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-775116,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 384ced268f38816576368a160deba535,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8fc7d3819ed5c339b62259bc67753cee24982db0369be0b38a6ca8f966bf04,PodSandboxId:da98fb4787b0bb0f2de2ffc7419e7646e38d0a52136eb99652528d0bd427d2a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763951946587345508,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3080f702a12a82962ed3da2e153b5f5e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5f5b8a6-b9e0-4650-9202-9fcde4ce612a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.064275598Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.065671692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76282e82-ffdf-4784-9426-aa36334bd0f4 name=/runtime.v1.RuntimeService/Version
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.065759848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76282e82-ffdf-4784-9426-aa36334bd0f4 name=/runtime.v1.RuntimeService/Version
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.066871363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2cd1509-0502-4e31-9b03-37c6fafbbf07 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.068165524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763952234068138977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2cd1509-0502-4e31-9b03-37c6fafbbf07 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.069274240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3a8c5d1-7ff0-4a84-a22a-a5db7d090be1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.069344073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3a8c5d1-7ff0-4a84-a22a-a5db7d090be1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.069649419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5545a8489a94c6304f0ab016a419587baab11f09fc4656a3e253460a59bc1d0d,PodSandboxId:6a544602be6f67919b1cba9f049677d9994f6a9b251530ae5a8372aa7094a35e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763952090267839168,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68f27977-c2e9-4ef1-9e72-5688758a7fd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453de60e000d723f093391e5151e038d4b9645f44c8745787bb671e9741a26c8,PodSandboxId:b1633ec2b0e85b19131f6af5cba32875f441d4a9d94ad4641ec5a60085caa6f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763952049126082171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cf30ca55-b9ac-463f-85eb-2b8d09b207a3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3444d2b913ffd0b99172eecdb3b64894f4221a7aa8d06c314cbff3766ba6ce71,PodSandboxId:f201e0ee13ac375ccdd7606037251e3419ef845e883ca348796f519f0cb16e71,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763952038419505333,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-l8kfr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93f5dcf2-c2fd-4d18-b62a-74c19d73c35b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:478b81ff07c3de547e1695798df93280f99fe23a32b487b6267540e9d5453daf,PodSandboxId:1226a110b65350188c7535c573aaded49fd5feeeedadfb7ec4818732050236aa,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763952021026503647,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp68l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: acd1300e-3895-44f9-bd25-990a91bc9e60,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59abadbc268f89f6ee43f1f9369f9117b9b11e5c00ba528c972586bbd1d4e7f6,PodSandboxId:538ebc897fda3fec2bd1f637b10921397b751c7f54a0f03b4b0a25b54a0a1e0d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763952017548753508,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6krct,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f289671-5a8b-4709-9248-850ce4026202,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961479e2f20b79d8438d1eee0e233c5ffdd77e34964bd785d08157b62efb94ec,PodSandboxId:863fe3c234b0a27a1cf1ff63fccc59379cf9a7845f9d1a94e9f7d577ed7ee4fd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763952002318465622,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-wx46f,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a15c2cc-ac93-489a-a5b4-587809a6c7c4,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b120024be2063321b8dd5a57b463fe6a98e3145329d60d69fb90095168e745,PodSandboxId:307a4331f24b0a6a6d7204e133d571ff92b42ec65321fc5d2aa3ca81de59987d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763951991399729290,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524486a5-e1af-4d71-a006-7fba106a887f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce99a3167bc835c18eb967965782de679ae1debc5ab2a16d000b4fbcba58025d,PodSandboxId:1705a6e46aa5ee3647c147a50767bea4c2129ee66b3f81f7e0a1bacf6c360bac,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763951967797399831,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z2tw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a82507-05b4-487d-94bb-6af479fcb8b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c83bf2fc6a39d554e21e04c55f2617e0faa331eb4124c0de9ba57674c84bb22,PodSandboxId:9bbf1f1e81591690dd0c3c03890b52130b
cdf8266eba91c21f15b878eec1c684,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763951966426227414,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee14bfe7-27ce-45c4-b62b-7e46a12726ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fa790c4b2fcf389c29ba39759a86d3d0274533450f1bc7ecf0a1c861e48fdd,PodSandboxId:cdcb35d1722cf273c5dace895721bfea5bbe6ae2ebbfac
00a039875bb2a0aa6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763951960274852589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qz7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a129b9-37bd-4d5d-86b0-5b27df489ff9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849f33d66481cc26e4126e332a05155c87bcada74d63aef60dbdabf525191155,PodSandboxId:6067e037337155479c9c5f69c5224d9cda2164c554ede7ab1765836387546dda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763951958415872974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tchwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08987ca8-5300-448b-9cbd-e202a3e4efa7,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66d34be21d78b639559feed26d01cecadeb1e98ace08feb88c62f66503da8ff,PodSandboxId:f3e9f27dd22750666214876004975df8edda06ea62a8a74fa7f8139f98224d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763951946646064644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58956e974452ecb51e89500e2dcdbb98,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d488abb14c037d8930bec0d82a4d3592e9ea59951554eabdc4483e5ea5ab2f,PodSandboxId:ba4a76354f0771aa95089678c3d267b1db5de3f99df2fa1d57c8b2f976dd3827,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763951946640886073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f7b4e7e5b19513a791c8ae3cad93
e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24849245f24b64a653900e24d6d1ef3f7e3bfe7328aea9dd027494f82f1e7eb,PodSandboxId:d50bf88eb38b6d1bcd0105df0c195775dcc341462f7b43647658058adba7d7a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763951946620914502,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-775116,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 384ced268f38816576368a160deba535,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8fc7d3819ed5c339b62259bc67753cee24982db0369be0b38a6ca8f966bf04,PodSandboxId:da98fb4787b0bb0f2de2ffc7419e7646e38d0a52136eb99652528d0bd427d2a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763951946587345508,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3080f702a12a82962ed3da2e153b5f5e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3a8c5d1-7ff0-4a84-a22a-a5db7d090be1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.098845402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5aef0e2-dc4c-437e-987d-07a9cf07408c name=/runtime.v1.RuntimeService/Version
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.099000581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5aef0e2-dc4c-437e-987d-07a9cf07408c name=/runtime.v1.RuntimeService/Version
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.100160732Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=861939e1-7b1c-4412-ab4b-f5d2f91ee199 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.101407672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763952234101378635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=861939e1-7b1c-4412-ab4b-f5d2f91ee199 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.102465387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7fd0019-345e-4a2b-a814-da21039c1390 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.102524015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7fd0019-345e-4a2b-a814-da21039c1390 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 02:43:54 addons-775116 crio[811]: time="2025-11-24 02:43:54.102851410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5545a8489a94c6304f0ab016a419587baab11f09fc4656a3e253460a59bc1d0d,PodSandboxId:6a544602be6f67919b1cba9f049677d9994f6a9b251530ae5a8372aa7094a35e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763952090267839168,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68f27977-c2e9-4ef1-9e72-5688758a7fd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453de60e000d723f093391e5151e038d4b9645f44c8745787bb671e9741a26c8,PodSandboxId:b1633ec2b0e85b19131f6af5cba32875f441d4a9d94ad4641ec5a60085caa6f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763952049126082171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cf30ca55-b9ac-463f-85eb-2b8d09b207a3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3444d2b913ffd0b99172eecdb3b64894f4221a7aa8d06c314cbff3766ba6ce71,PodSandboxId:f201e0ee13ac375ccdd7606037251e3419ef845e883ca348796f519f0cb16e71,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763952038419505333,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-l8kfr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93f5dcf2-c2fd-4d18-b62a-74c19d73c35b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:478b81ff07c3de547e1695798df93280f99fe23a32b487b6267540e9d5453daf,PodSandboxId:1226a110b65350188c7535c573aaded49fd5feeeedadfb7ec4818732050236aa,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763952021026503647,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp68l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: acd1300e-3895-44f9-bd25-990a91bc9e60,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59abadbc268f89f6ee43f1f9369f9117b9b11e5c00ba528c972586bbd1d4e7f6,PodSandboxId:538ebc897fda3fec2bd1f637b10921397b751c7f54a0f03b4b0a25b54a0a1e0d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763952017548753508,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6krct,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f289671-5a8b-4709-9248-850ce4026202,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961479e2f20b79d8438d1eee0e233c5ffdd77e34964bd785d08157b62efb94ec,PodSandboxId:863fe3c234b0a27a1cf1ff63fccc59379cf9a7845f9d1a94e9f7d577ed7ee4fd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763952002318465622,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-wx46f,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a15c2cc-ac93-489a-a5b4-587809a6c7c4,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b120024be2063321b8dd5a57b463fe6a98e3145329d60d69fb90095168e745,PodSandboxId:307a4331f24b0a6a6d7204e133d571ff92b42ec65321fc5d2aa3ca81de59987d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763951991399729290,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524486a5-e1af-4d71-a006-7fba106a887f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce99a3167bc835c18eb967965782de679ae1debc5ab2a16d000b4fbcba58025d,PodSandboxId:1705a6e46aa5ee3647c147a50767bea4c2129ee66b3f81f7e0a1bacf6c360bac,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763951967797399831,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z2tw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a82507-05b4-487d-94bb-6af479fcb8b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c83bf2fc6a39d554e21e04c55f2617e0faa331eb4124c0de9ba57674c84bb22,PodSandboxId:9bbf1f1e81591690dd0c3c03890b52130b
cdf8266eba91c21f15b878eec1c684,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763951966426227414,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee14bfe7-27ce-45c4-b62b-7e46a12726ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fa790c4b2fcf389c29ba39759a86d3d0274533450f1bc7ecf0a1c861e48fdd,PodSandboxId:cdcb35d1722cf273c5dace895721bfea5bbe6ae2ebbfac
00a039875bb2a0aa6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763951960274852589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qz7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a129b9-37bd-4d5d-86b0-5b27df489ff9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849f33d66481cc26e4126e332a05155c87bcada74d63aef60dbdabf525191155,PodSandboxId:6067e037337155479c9c5f69c5224d9cda2164c554ede7ab1765836387546dda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763951958415872974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tchwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08987ca8-5300-448b-9cbd-e202a3e4efa7,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66d34be21d78b639559feed26d01cecadeb1e98ace08feb88c62f66503da8ff,PodSandboxId:f3e9f27dd22750666214876004975df8edda06ea62a8a74fa7f8139f98224d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763951946646064644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58956e974452ecb51e89500e2dcdbb98,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d488abb14c037d8930bec0d82a4d3592e9ea59951554eabdc4483e5ea5ab2f,PodSandboxId:ba4a76354f0771aa95089678c3d267b1db5de3f99df2fa1d57c8b2f976dd3827,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763951946640886073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f7b4e7e5b19513a791c8ae3cad93
e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24849245f24b64a653900e24d6d1ef3f7e3bfe7328aea9dd027494f82f1e7eb,PodSandboxId:d50bf88eb38b6d1bcd0105df0c195775dcc341462f7b43647658058adba7d7a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763951946620914502,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-775116,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 384ced268f38816576368a160deba535,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8fc7d3819ed5c339b62259bc67753cee24982db0369be0b38a6ca8f966bf04,PodSandboxId:da98fb4787b0bb0f2de2ffc7419e7646e38d0a52136eb99652528d0bd427d2a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763951946587345508,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-775116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3080f702a12a82962ed3da2e153b5f5e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7fd0019-345e-4a2b-a814-da21039c1390 name=/runtime.v1.RuntimeService/ListContainers
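
Each of the ListContainers dumps above is cri-o answering a CRI gRPC poll; the kubelet and crictl issue the same /runtime.v1.RuntimeService/ListContainers call that appears in these debug entries. A minimal client sketch follows, assuming the default cri-o socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 package — neither is confirmed by the log beyond the method and message names it prints.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path: cri-o's default; the log above does not print it.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Empty filter, matching the "No filters were applied" requests in the log.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
	}
}
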
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	5545a8489a94c       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   6a544602be6f6       nginx                                      default
	453de60e000d7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   b1633ec2b0e85       busybox                                    default
	3444d2b913ffd       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   f201e0ee13ac3       ingress-nginx-controller-6c8bf45fb-l8kfr   ingress-nginx
	478b81ff07c3d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              patch                     0                   1226a110b6535       ingress-nginx-admission-patch-mp68l        ingress-nginx
	59abadbc268f8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   538ebc897fda3       ingress-nginx-admission-create-6krct       ingress-nginx
	961479e2f20b7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   863fe3c234b0a       local-path-provisioner-648f6765c9-wx46f    local-path-storage
	71b120024be20       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   307a4331f24b0       kube-ingress-dns-minikube                  kube-system
	ce99a3167bc83       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   1705a6e46aa5e       amd-gpu-device-plugin-z2tw8                kube-system
	2c83bf2fc6a39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   9bbf1f1e81591       storage-provisioner                        kube-system
	c0fa790c4b2fc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   cdcb35d1722cf       coredns-66bc5c9577-qz7nr                   kube-system
	849f33d66481c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   6067e03733715       kube-proxy-tchwq                           kube-system
	c66d34be21d78       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago       Running             kube-scheduler            0                   f3e9f27dd2275       kube-scheduler-addons-775116               kube-system
	57d488abb14c0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago       Running             kube-controller-manager   0                   ba4a76354f077       kube-controller-manager-addons-775116      kube-system
	e24849245f24b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago       Running             etcd                      0                   d50bf88eb38b6       etcd-addons-775116                         kube-system
	1f8fc7d3819ed       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago       Running             kube-apiserver            0                   da98fb4787b0b       kube-apiserver-addons-775116               kube-system
	
	
	==> coredns [c0fa790c4b2fcf389c29ba39759a86d3d0274533450f1bc7ecf0a1c861e48fdd] <==
	[INFO] 10.244.0.8:48197 - 20284 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000168903s
	[INFO] 10.244.0.8:48197 - 41725 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000258629s
	[INFO] 10.244.0.8:48197 - 60095 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000102487s
	[INFO] 10.244.0.8:48197 - 46340 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000085088s
	[INFO] 10.244.0.8:48197 - 16118 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000150986s
	[INFO] 10.244.0.8:48197 - 46096 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000082807s
	[INFO] 10.244.0.8:48197 - 18098 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000092893s
	[INFO] 10.244.0.8:34049 - 65279 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018437s
	[INFO] 10.244.0.8:34049 - 40 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000063472s
	[INFO] 10.244.0.8:51189 - 54041 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114164s
	[INFO] 10.244.0.8:51189 - 54527 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064875s
	[INFO] 10.244.0.8:49928 - 63059 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112777s
	[INFO] 10.244.0.8:49928 - 63281 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102765s
	[INFO] 10.244.0.8:49518 - 36462 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112514s
	[INFO] 10.244.0.8:49518 - 36659 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071569s
	[INFO] 10.244.0.23:59233 - 6811 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000445918s
	[INFO] 10.244.0.23:38858 - 29737 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000435265s
	[INFO] 10.244.0.23:57085 - 19202 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092978s
	[INFO] 10.244.0.23:50612 - 43807 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000261307s
	[INFO] 10.244.0.23:52663 - 53412 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094639s
	[INFO] 10.244.0.23:40002 - 30346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00032869s
	[INFO] 10.244.0.23:56525 - 32529 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004901336s
	[INFO] 10.244.0.23:40287 - 60645 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.004941086s
	[INFO] 10.244.0.27:54841 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000429343s
	[INFO] 10.244.0.27:38817 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000166034s
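
The runs of NXDOMAIN answers followed by a single NOERROR for registry.kube-system.svc.cluster.local above are the pod resolver walking its DNS search path before trying the name as-is. The sketch below reproduces that expansion under the usual kubelet defaults (search kube-system.svc.cluster.local svc.cluster.local cluster.local, ndots:5); those defaults are an assumption here, since the log does not print the pod's resolv.conf.

package main

import (
	"fmt"
	"strings"
)

// expand mimics the resolver's search-list behaviour: a name with fewer dots
// than ndots is tried against each search domain before being tried as-is,
// which is why coredns sees several NXDOMAIN lookups before the NOERROR one.
func expand(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

func main() {
	// Assumed pod search path; typical for a pod in the kube-system namespace.
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // prints the four query names seen in the coredns log above
	}
}
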
	
	
	==> describe nodes <==
	Name:               addons-775116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-775116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=addons-775116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T02_39_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-775116
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 02:39:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-775116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 02:43:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 02:41:45 +0000   Mon, 24 Nov 2025 02:39:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 02:41:45 +0000   Mon, 24 Nov 2025 02:39:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 02:41:45 +0000   Mon, 24 Nov 2025 02:39:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 02:41:45 +0000   Mon, 24 Nov 2025 02:39:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    addons-775116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 23af452466cc48f5bd5766cdcf8ba09a
	  System UUID:                23af4524-66cc-48f5-bd57-66cdcf8ba09a
	  Boot ID:                    10845f70-e940-436f-88b6-3cde954a7b3b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-world-app-5d498dc89-rrt7m             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-l8kfr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m29s
	  kube-system                 amd-gpu-device-plugin-z2tw8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-66bc5c9577-qz7nr                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m37s
	  kube-system                 etcd-addons-775116                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m42s
	  kube-system                 kube-apiserver-addons-775116                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-addons-775116       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-tchwq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-addons-775116                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  local-path-storage          local-path-provisioner-648f6765c9-wx46f     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m35s  kube-proxy       
	  Normal  Starting                 4m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m42s  kubelet          Node addons-775116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s  kubelet          Node addons-775116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s  kubelet          Node addons-775116 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m41s  kubelet          Node addons-775116 status is now: NodeReady
	  Normal  RegisteredNode           4m38s  node-controller  Node addons-775116 event: Registered Node addons-775116 in Controller
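
The "Allocated resources" summary in the describe-nodes output above is simply the column sums of the per-pod requests: 100m+100m+100m+250m+200m+100m = 850m CPU and 90Mi+70Mi+100Mi = 260Mi memory, measured against the node's 2-CPU / 4001784Ki allocatable capacity. A minimal sketch of that arithmetic, with the values copied from the tables above:

package main

import "fmt"

func main() {
	// Per-pod CPU requests (millicores): ingress-nginx controller, coredns,
	// etcd, kube-apiserver, kube-controller-manager, kube-scheduler.
	cpuRequests := []int{100, 100, 100, 250, 200, 100}
	// Per-pod memory requests (Mi): ingress-nginx controller, coredns, etcd.
	memRequests := []int{90, 70, 100}

	cpu, mem := 0, 0
	for _, c := range cpuRequests {
		cpu += c
	}
	for _, m := range memRequests {
		mem += m
	}

	const allocatableCPU = 2000      // 2 cores, in millicores
	const allocatableMemKi = 4001784 // from the Allocatable block above

	fmt.Printf("cpu: %dm (%d%%)\n", cpu, cpu*100/allocatableCPU)          // 850m (42%)
	fmt.Printf("memory: %dMi (%d%%)\n", mem, mem*1024*100/allocatableMemKi) // 260Mi (6%)
}
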
	
	
	==> dmesg <==
	[  +0.062585] kauditd_printk_skb: 357 callbacks suppressed
	[  +0.630932] kauditd_printk_skb: 413 callbacks suppressed
	[  +4.123816] kauditd_printk_skb: 239 callbacks suppressed
	[  +6.393880] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.765977] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.951710] kauditd_printk_skb: 26 callbacks suppressed
	[Nov24 02:40] kauditd_printk_skb: 35 callbacks suppressed
	[  +3.957297] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.812935] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.984144] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.986769] kauditd_printk_skb: 153 callbacks suppressed
	[  +0.000053] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.169227] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000074] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.941686] kauditd_printk_skb: 47 callbacks suppressed
	[Nov24 02:41] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.997286] kauditd_printk_skb: 59 callbacks suppressed
	[  +1.417766] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.889970] kauditd_printk_skb: 182 callbacks suppressed
	[  +4.245836] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.835343] kauditd_printk_skb: 98 callbacks suppressed
	[  +7.331546] kauditd_printk_skb: 5 callbacks suppressed
	[Nov24 02:42] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.857080] kauditd_printk_skb: 41 callbacks suppressed
	[Nov24 02:43] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [e24849245f24b64a653900e24d6d1ef3f7e3bfe7328aea9dd027494f82f1e7eb] <==
	{"level":"info","ts":"2025-11-24T02:40:20.828172Z","caller":"traceutil/trace.go:172","msg":"trace[1149055997] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1008; }","duration":"111.597002ms","start":"2025-11-24T02:40:20.716555Z","end":"2025-11-24T02:40:20.828152Z","steps":["trace[1149055997] 'agreement among raft nodes before linearized reading'  (duration: 111.401031ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:40:20.830777Z","caller":"traceutil/trace.go:172","msg":"trace[1353098340] transaction","detail":"{read_only:false; response_revision:1008; number_of_response:1; }","duration":"157.883753ms","start":"2025-11-24T02:40:20.669624Z","end":"2025-11-24T02:40:20.827508Z","steps":["trace[1353098340] 'process raft request'  (duration: 156.073037ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T02:40:26.602544Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.820062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T02:40:26.602612Z","caller":"traceutil/trace.go:172","msg":"trace[24363486] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"254.899449ms","start":"2025-11-24T02:40:26.347700Z","end":"2025-11-24T02:40:26.602600Z","steps":["trace[24363486] 'range keys from in-memory index tree'  (duration: 254.768039ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T02:40:35.189660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.716464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T02:40:35.190483Z","caller":"traceutil/trace.go:172","msg":"trace[188956077] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1116; }","duration":"101.463087ms","start":"2025-11-24T02:40:35.088907Z","end":"2025-11-24T02:40:35.190370Z","steps":["trace[188956077] 'range keys from in-memory index tree'  (duration: 100.641775ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:40:36.126484Z","caller":"traceutil/trace.go:172","msg":"trace[1843766103] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"122.883171ms","start":"2025-11-24T02:40:36.003588Z","end":"2025-11-24T02:40:36.126471Z","steps":["trace[1843766103] 'process raft request'  (duration: 122.792457ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:40:43.580677Z","caller":"traceutil/trace.go:172","msg":"trace[150646931] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"194.14978ms","start":"2025-11-24T02:40:43.386516Z","end":"2025-11-24T02:40:43.580666Z","steps":["trace[150646931] 'process raft request'  (duration: 194.066635ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T02:41:24.072210Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.972309ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T02:41:24.073098Z","caller":"traceutil/trace.go:172","msg":"trace[1085560836] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1451; }","duration":"249.98139ms","start":"2025-11-24T02:41:23.823103Z","end":"2025-11-24T02:41:24.073084Z","steps":["trace[1085560836] 'range keys from in-memory index tree'  (duration: 248.957446ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T02:41:24.072504Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.402663ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6455798687580589118 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/registry-proxy-tgmzc.187ad1130770246a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/registry-proxy-tgmzc.187ad1130770246a\" value_size:651 lease:6455798687580588926 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T02:41:24.073166Z","caller":"traceutil/trace.go:172","msg":"trace[835040785] transaction","detail":"{read_only:false; response_revision:1452; number_of_response:1; }","duration":"319.342556ms","start":"2025-11-24T02:41:23.753815Z","end":"2025-11-24T02:41:24.073157Z","steps":["trace[835040785] 'process raft request'  (duration: 82.257986ms)","trace[835040785] 'compare'  (duration: 236.316915ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T02:41:24.073208Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T02:41:23.753796Z","time spent":"319.385146ms","remote":"127.0.0.1:42126","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":735,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/registry-proxy-tgmzc.187ad1130770246a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/registry-proxy-tgmzc.187ad1130770246a\" value_size:651 lease:6455798687580588926 >> failure:<>"}
	{"level":"info","ts":"2025-11-24T02:41:24.075350Z","caller":"traceutil/trace.go:172","msg":"trace[1122299906] linearizableReadLoop","detail":"{readStateIndex:1497; appliedIndex:1497; }","duration":"195.264644ms","start":"2025-11-24T02:41:23.880074Z","end":"2025-11-24T02:41:24.075339Z","steps":["trace[1122299906] 'read index received'  (duration: 195.260354ms)","trace[1122299906] 'applied index is now lower than readState.Index'  (duration: 3.444µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T02:41:24.075601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"195.532234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T02:41:24.075626Z","caller":"traceutil/trace.go:172","msg":"trace[75718275] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1452; }","duration":"195.563736ms","start":"2025-11-24T02:41:23.880054Z","end":"2025-11-24T02:41:24.075617Z","steps":["trace[75718275] 'agreement among raft nodes before linearized reading'  (duration: 195.507577ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:41:24.076566Z","caller":"traceutil/trace.go:172","msg":"trace[202425863] transaction","detail":"{read_only:false; response_revision:1453; number_of_response:1; }","duration":"275.479374ms","start":"2025-11-24T02:41:23.801079Z","end":"2025-11-24T02:41:24.076558Z","steps":["trace[202425863] 'process raft request'  (duration: 275.014138ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:41:35.563994Z","caller":"traceutil/trace.go:172","msg":"trace[1679519525] transaction","detail":"{read_only:false; response_revision:1569; number_of_response:1; }","duration":"213.355691ms","start":"2025-11-24T02:41:35.350624Z","end":"2025-11-24T02:41:35.563980Z","steps":["trace[1679519525] 'process raft request'  (duration: 213.232335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T02:41:36.013640Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.020878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T02:41:36.013692Z","caller":"traceutil/trace.go:172","msg":"trace[1385030647] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1572; }","duration":"176.088753ms","start":"2025-11-24T02:41:35.837593Z","end":"2025-11-24T02:41:36.013682Z","steps":["trace[1385030647] 'range keys from in-memory index tree'  (duration: 175.405498ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T02:41:36.014260Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.761777ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6455798687580589476 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gadget/gadget-pw7cd.187ad101d2dcfcb3\" mod_revision:975 > success:<request_delete_range:<key:\"/registry/events/gadget/gadget-pw7cd.187ad101d2dcfcb3\" > > failure:<request_range:<key:\"/registry/events/gadget/gadget-pw7cd.187ad101d2dcfcb3\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-11-24T02:41:36.014316Z","caller":"traceutil/trace.go:172","msg":"trace[319757245] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1573; }","duration":"234.386274ms","start":"2025-11-24T02:41:35.779923Z","end":"2025-11-24T02:41:36.014309Z","steps":["trace[319757245] 'process raft request'  (duration: 58.324714ms)","trace[319757245] 'compare'  (duration: 175.295646ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T02:41:36.013415Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.400952ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T02:41:36.014733Z","caller":"traceutil/trace.go:172","msg":"trace[1164822762] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1572; }","duration":"191.727307ms","start":"2025-11-24T02:41:35.822995Z","end":"2025-11-24T02:41:36.014722Z","steps":["trace[1164822762] 'range keys from in-memory index tree'  (duration: 190.342994ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:42:07.882127Z","caller":"traceutil/trace.go:172","msg":"trace[775643206] transaction","detail":"{read_only:false; response_revision:1699; number_of_response:1; }","duration":"153.998583ms","start":"2025-11-24T02:42:07.728115Z","end":"2025-11-24T02:42:07.882114Z","steps":["trace[775643206] 'process raft request'  (duration: 153.862187ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:43:54 up 5 min,  0 users,  load average: 0.48, 0.92, 0.48
	Linux addons-775116 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Nov 24 01:33:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1f8fc7d3819ed5c339b62259bc67753cee24982db0369be0b38a6ca8f966bf04] <==
	E1124 02:40:06.384949       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.139.40:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.139.40:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.139.40:443: connect: connection refused" logger="UnhandledError"
	E1124 02:40:06.389141       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.139.40:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.139.40:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.139.40:443: connect: connection refused" logger="UnhandledError"
	E1124 02:40:06.394315       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.139.40:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.139.40:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.139.40:443: connect: connection refused" logger="UnhandledError"
	I1124 02:40:06.485125       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 02:40:56.617506       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:36884: use of closed network connection
	E1124 02:40:56.799306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:36906: use of closed network connection
	I1124 02:41:18.088788       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.94.159"}
	I1124 02:41:25.422231       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1124 02:41:25.658155       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.34.200"}
	I1124 02:41:43.906122       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1124 02:42:07.395106       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1124 02:42:12.720288       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 02:42:12.720343       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 02:42:12.761266       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 02:42:12.761323       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 02:42:12.770245       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 02:42:12.770274       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 02:42:12.787487       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 02:42:12.787592       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 02:42:12.826711       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 02:42:12.828069       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1124 02:42:13.779975       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1124 02:42:13.837511       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1124 02:42:13.839908       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1124 02:43:53.080210       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.251.122"}
	
	
	==> kube-controller-manager [57d488abb14c037d8930bec0d82a4d3592e9ea59951554eabdc4483e5ea5ab2f] <==
	E1124 02:42:17.524097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:42:20.622830       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:42:20.623830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:42:21.533792       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:42:21.534985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:42:22.909101       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:42:22.910219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:42:27.388805       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:42:27.389832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:42:29.940639       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:42:29.941578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:42:33.177472       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:42:33.178623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:42:44.920958       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:42:44.922163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:42:47.682713       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:42:47.684003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:42:51.933104       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:42:51.934230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:43:20.150203       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:43:20.151195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:43:34.630967       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:43:34.631882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 02:43:38.340228       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 02:43:38.341197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [849f33d66481cc26e4126e332a05155c87bcada74d63aef60dbdabf525191155] <==
	I1124 02:39:18.993590       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:39:19.095719       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:39:19.095757       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.95"]
	E1124 02:39:19.095829       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:39:19.210929       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 02:39:19.210992       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 02:39:19.211060       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:39:19.262540       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:39:19.262813       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:39:19.262839       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:39:19.270007       1 config.go:200] "Starting service config controller"
	I1124 02:39:19.270125       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:39:19.270143       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:39:19.270147       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:39:19.270158       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:39:19.270179       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:39:19.279494       1 config.go:309] "Starting node config controller"
	I1124 02:39:19.279522       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:39:19.279530       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:39:19.370617       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 02:39:19.370668       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:39:19.370687       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c66d34be21d78b639559feed26d01cecadeb1e98ace08feb88c62f66503da8ff] <==
	E1124 02:39:09.590877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:39:09.591063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:39:09.591249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:39:09.591354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:39:09.591487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:39:09.594123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:39:09.594207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:39:09.594296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:39:09.594362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:39:09.594504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:39:09.594639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:39:09.594809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:39:09.594936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:39:10.382786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:39:10.415897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:39:10.489977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:39:10.515140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:39:10.525103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:39:10.563108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:39:10.585981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:39:10.587230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:39:10.608623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:39:10.724452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:39:10.791897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1124 02:39:12.870117       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 02:42:15 addons-775116 kubelet[1498]: I1124 02:42:15.933609    1498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28bd1730-4b1f-46e8-beec-8895e6813333" path="/var/lib/kubelet/pods/28bd1730-4b1f-46e8-beec-8895e6813333/volumes"
	Nov 24 02:42:19 addons-775116 kubelet[1498]: I1124 02:42:19.921616    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-z2tw8" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:42:22 addons-775116 kubelet[1498]: E1124 02:42:22.239801    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952142238293442  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:42:22 addons-775116 kubelet[1498]: E1124 02:42:22.239842    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952142238293442  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:42:32 addons-775116 kubelet[1498]: E1124 02:42:32.244234    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952152242693006  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:42:32 addons-775116 kubelet[1498]: E1124 02:42:32.244274    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952152242693006  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:42:42 addons-775116 kubelet[1498]: E1124 02:42:42.247386    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952162246825689  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:42:42 addons-775116 kubelet[1498]: E1124 02:42:42.247427    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952162246825689  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:42:52 addons-775116 kubelet[1498]: E1124 02:42:52.250360    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952172249614037  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:42:52 addons-775116 kubelet[1498]: E1124 02:42:52.250458    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952172249614037  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:02 addons-775116 kubelet[1498]: E1124 02:43:02.253545    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952182252852333  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:02 addons-775116 kubelet[1498]: E1124 02:43:02.253568    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952182252852333  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:12 addons-775116 kubelet[1498]: E1124 02:43:12.255937    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952192255606629  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:12 addons-775116 kubelet[1498]: E1124 02:43:12.255974    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952192255606629  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:22 addons-775116 kubelet[1498]: E1124 02:43:22.261439    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952202258497241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:22 addons-775116 kubelet[1498]: E1124 02:43:22.261483    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952202258497241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:28 addons-775116 kubelet[1498]: I1124 02:43:28.921421    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:43:32 addons-775116 kubelet[1498]: E1124 02:43:32.267480    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952212266558757  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:32 addons-775116 kubelet[1498]: E1124 02:43:32.267775    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952212266558757  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:42 addons-775116 kubelet[1498]: E1124 02:43:42.272300    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952222270786833  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:42 addons-775116 kubelet[1498]: E1124 02:43:42.272323    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952222270786833  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:46 addons-775116 kubelet[1498]: I1124 02:43:46.920932    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-z2tw8" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:43:52 addons-775116 kubelet[1498]: E1124 02:43:52.274987    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763952232274589742  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:52 addons-775116 kubelet[1498]: E1124 02:43:52.275042    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763952232274589742  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 02:43:53 addons-775116 kubelet[1498]: I1124 02:43:53.032314    1498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8xr\" (UniqueName: \"kubernetes.io/projected/30458c01-5b3f-435d-8411-799e600da383-kube-api-access-dq8xr\") pod \"hello-world-app-5d498dc89-rrt7m\" (UID: \"30458c01-5b3f-435d-8411-799e600da383\") " pod="default/hello-world-app-5d498dc89-rrt7m"
	
	
	==> storage-provisioner [2c83bf2fc6a39d554e21e04c55f2617e0faa331eb4124c0de9ba57674c84bb22] <==
	W1124 02:43:30.311339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:32.315710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:32.323478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:34.327465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:34.332855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:36.337726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:36.345918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:38.349734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:38.355236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:40.359203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:40.363532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:42.367283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:42.372057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:44.375643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:44.382524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:46.387990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:46.393211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:48.396698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:48.401264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:50.404572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:50.411053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:52.414769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:52.419563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:54.423605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:43:54.428476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-775116 -n addons-775116
helpers_test.go:269: (dbg) Run:  kubectl --context addons-775116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-rrt7m ingress-nginx-admission-create-6krct ingress-nginx-admission-patch-mp68l
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-775116 describe pod hello-world-app-5d498dc89-rrt7m ingress-nginx-admission-create-6krct ingress-nginx-admission-patch-mp68l
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-775116 describe pod hello-world-app-5d498dc89-rrt7m ingress-nginx-admission-create-6krct ingress-nginx-admission-patch-mp68l: exit status 1 (71.227415ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-rrt7m
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-775116/192.168.39.95
	Start Time:       Mon, 24 Nov 2025 02:43:52 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dq8xr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dq8xr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-rrt7m to addons-775116
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6krct" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mp68l" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-775116 describe pod hello-world-app-5d498dc89-rrt7m ingress-nginx-admission-create-6krct ingress-nginx-admission-patch-mp68l: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775116 addons disable ingress-dns --alsologtostderr -v=1: (1.100483339s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775116 addons disable ingress --alsologtostderr -v=1: (7.728488143s)
--- FAIL: TestAddons/parallel/Ingress (158.82s)

                                                
                                    
TestPreload (153.6s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-714953 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1124 03:28:22.173735  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-714953 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m27.927274195s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-714953 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-714953 image pull gcr.io/k8s-minikube/busybox: (3.578327592s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-714953
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-714953: (6.809433224s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-714953 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1124 03:30:19.099194  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-714953 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (52.432924418s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-714953 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
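For context, the assertion that fails here boils down to running `image list` against the restarted profile and checking that the busybox image pulled before the stop is still present after the preload restart. A minimal, hypothetical Go sketch of that check (not the actual preload_test.go code; the binary path and profile name are simply taken from the run above) might look like:

// Hypothetical sketch: reproduce the final TestPreload assertion by listing
// images on the restarted profile and checking for the previously pulled image.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile := "test-preload-714953" // profile name from the run above

	// Run "minikube -p <profile> image list" and capture its output.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "image list failed: %v\n%s", err, out)
		os.Exit(1)
	}

	// The test expects the image pulled before the stop to survive the restart.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Fprintf(os.Stderr, "expected gcr.io/k8s-minikube/busybox in image list, got:\n%s", out)
		os.Exit(1)
	}
	fmt.Println("busybox image survived the restart")
}

In the failing run above, the image list contains only the preloaded v1.32.0 system images, so a check along these lines reports the busybox image as missing.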
panic.go:615: *** TestPreload FAILED at 2025-11-24 03:30:29.52831445 +0000 UTC m=+3152.376284619
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-714953 -n test-preload-714953
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-714953 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-714953 logs -n 25: (1.012452135s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-615187 ssh -n multinode-615187-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │ 24 Nov 25 03:17 UTC │
	│ ssh     │ multinode-615187 ssh -n multinode-615187 sudo cat /home/docker/cp-test_multinode-615187-m03_multinode-615187.txt                                          │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │ 24 Nov 25 03:17 UTC │
	│ cp      │ multinode-615187 cp multinode-615187-m03:/home/docker/cp-test.txt multinode-615187-m02:/home/docker/cp-test_multinode-615187-m03_multinode-615187-m02.txt │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │ 24 Nov 25 03:17 UTC │
	│ ssh     │ multinode-615187 ssh -n multinode-615187-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │ 24 Nov 25 03:17 UTC │
	│ ssh     │ multinode-615187 ssh -n multinode-615187-m02 sudo cat /home/docker/cp-test_multinode-615187-m03_multinode-615187-m02.txt                                  │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │ 24 Nov 25 03:17 UTC │
	│ node    │ multinode-615187 node stop m03                                                                                                                            │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │ 24 Nov 25 03:17 UTC │
	│ node    │ multinode-615187 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │ 24 Nov 25 03:18 UTC │
	│ node    │ list -p multinode-615187                                                                                                                                  │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:18 UTC │                     │
	│ stop    │ -p multinode-615187                                                                                                                                       │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:18 UTC │ 24 Nov 25 03:21 UTC │
	│ start   │ -p multinode-615187 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:21 UTC │ 24 Nov 25 03:23 UTC │
	│ node    │ list -p multinode-615187                                                                                                                                  │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:23 UTC │                     │
	│ node    │ multinode-615187 node delete m03                                                                                                                          │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:23 UTC │ 24 Nov 25 03:23 UTC │
	│ stop    │ multinode-615187 stop                                                                                                                                     │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:23 UTC │ 24 Nov 25 03:25 UTC │
	│ start   │ -p multinode-615187 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:25 UTC │ 24 Nov 25 03:27 UTC │
	│ node    │ list -p multinode-615187                                                                                                                                  │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:27 UTC │                     │
	│ start   │ -p multinode-615187-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-615187-m02 │ jenkins │ v1.37.0 │ 24 Nov 25 03:27 UTC │                     │
	│ start   │ -p multinode-615187-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-615187-m03 │ jenkins │ v1.37.0 │ 24 Nov 25 03:27 UTC │ 24 Nov 25 03:27 UTC │
	│ node    │ add -p multinode-615187                                                                                                                                   │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:27 UTC │                     │
	│ delete  │ -p multinode-615187-m03                                                                                                                                   │ multinode-615187-m03 │ jenkins │ v1.37.0 │ 24 Nov 25 03:27 UTC │ 24 Nov 25 03:27 UTC │
	│ delete  │ -p multinode-615187                                                                                                                                       │ multinode-615187     │ jenkins │ v1.37.0 │ 24 Nov 25 03:27 UTC │ 24 Nov 25 03:27 UTC │
	│ start   │ -p test-preload-714953 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-714953  │ jenkins │ v1.37.0 │ 24 Nov 25 03:27 UTC │ 24 Nov 25 03:29 UTC │
	│ image   │ test-preload-714953 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-714953  │ jenkins │ v1.37.0 │ 24 Nov 25 03:29 UTC │ 24 Nov 25 03:29 UTC │
	│ stop    │ -p test-preload-714953                                                                                                                                    │ test-preload-714953  │ jenkins │ v1.37.0 │ 24 Nov 25 03:29 UTC │ 24 Nov 25 03:29 UTC │
	│ start   │ -p test-preload-714953 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-714953  │ jenkins │ v1.37.0 │ 24 Nov 25 03:29 UTC │ 24 Nov 25 03:30 UTC │
	│ image   │ test-preload-714953 image list                                                                                                                            │ test-preload-714953  │ jenkins │ v1.37.0 │ 24 Nov 25 03:30 UTC │ 24 Nov 25 03:30 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
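	
	(For reference: the TestPreload sequence recorded in the Audit rows above can be replayed by hand against a scratch profile. The invocations below are assembled from those rows; the out/minikube-linux-amd64 path and the placement of -p are illustrative, and any locally built minikube binary will do.)
	  out/minikube-linux-amd64 start -p test-preload-714953 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	  out/minikube-linux-amd64 -p test-preload-714953 image pull gcr.io/k8s-minikube/busybox
	  out/minikube-linux-amd64 stop -p test-preload-714953
	  out/minikube-linux-amd64 start -p test-preload-714953 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	  out/minikube-linux-amd64 -p test-preload-714953 image list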
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:29:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:29:36.956774  212849 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:29:36.957089  212849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:29:36.957101  212849 out.go:374] Setting ErrFile to fd 2...
	I1124 03:29:36.957109  212849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:29:36.957387  212849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:29:36.957948  212849 out.go:368] Setting JSON to false
	I1124 03:29:36.958951  212849 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11517,"bootTime":1763943460,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:29:36.959023  212849 start.go:143] virtualization: kvm guest
	I1124 03:29:36.960994  212849 out.go:179] * [test-preload-714953] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:29:36.962589  212849 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:29:36.962608  212849 notify.go:221] Checking for updates...
	I1124 03:29:36.965087  212849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:29:36.966335  212849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:29:36.967631  212849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 03:29:36.968762  212849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:29:36.969911  212849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:29:36.971509  212849 config.go:182] Loaded profile config "test-preload-714953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1124 03:29:36.973174  212849 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1124 03:29:36.974423  212849 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:29:37.009068  212849 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 03:29:37.010222  212849 start.go:309] selected driver: kvm2
	I1124 03:29:37.010243  212849 start.go:927] validating driver "kvm2" against &{Name:test-preload-714953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-714953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:29:37.010347  212849 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:29:37.011261  212849 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:29:37.011306  212849 cni.go:84] Creating CNI manager for ""
	I1124 03:29:37.011394  212849 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:29:37.011464  212849 start.go:353] cluster config:
	{Name:test-preload-714953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-714953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:29:37.011571  212849 iso.go:125] acquiring lock: {Name:mk63ee8f30093c8c7d0696dd2486a8eb0d8bd024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:29:37.013046  212849 out.go:179] * Starting "test-preload-714953" primary control-plane node in "test-preload-714953" cluster
	I1124 03:29:37.014072  212849 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1124 03:29:37.208759  212849 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1124 03:29:37.208806  212849 cache.go:65] Caching tarball of preloaded images
	I1124 03:29:37.209008  212849 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1124 03:29:37.210757  212849 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1124 03:29:37.211857  212849 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1124 03:29:37.324535  212849 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1124 03:29:37.324586  212849 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1124 03:29:46.908807  212849 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1124 03:29:46.908958  212849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/config.json ...
	I1124 03:29:46.909194  212849 start.go:360] acquireMachinesLock for test-preload-714953: {Name:mk6edb9cd27540c3b670af896ffc377aa954769e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 03:29:46.909265  212849 start.go:364] duration metric: took 46.752µs to acquireMachinesLock for "test-preload-714953"
	I1124 03:29:46.909284  212849 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:29:46.909289  212849 fix.go:54] fixHost starting: 
	I1124 03:29:46.911471  212849 fix.go:112] recreateIfNeeded on test-preload-714953: state=Stopped err=<nil>
	W1124 03:29:46.911503  212849 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:29:46.915254  212849 out.go:252] * Restarting existing kvm2 VM for "test-preload-714953" ...
	I1124 03:29:46.915292  212849 main.go:143] libmachine: starting domain...
	I1124 03:29:46.915302  212849 main.go:143] libmachine: ensuring networks are active...
	I1124 03:29:46.916221  212849 main.go:143] libmachine: Ensuring network default is active
	I1124 03:29:46.916662  212849 main.go:143] libmachine: Ensuring network mk-test-preload-714953 is active
	I1124 03:29:46.917167  212849 main.go:143] libmachine: getting domain XML...
	I1124 03:29:46.918444  212849 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-714953</name>
	  <uuid>a6ed2e2e-0515-43c2-ba89-62c6e5779730</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21975-185833/.minikube/machines/test-preload-714953/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21975-185833/.minikube/machines/test-preload-714953/test-preload-714953.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:17:fa:4c'/>
	      <source network='mk-test-preload-714953'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:1f:65:5b'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
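	
	(The XML above is the domain definition libmachine logs just before restarting the VM. If the guest ever needs to be examined outside of minikube, the same domain can be queried through libvirt with virsh against the qemu:///system URI used earlier in this log; a debugging sketch, assuming virsh is installed on the host:)
	  virsh -c qemu:///system dominfo test-preload-714953
	  virsh -c qemu:///system domifaddr test-preload-714953
	  virsh -c qemu:///system dumpxml test-preload-714953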
	
	I1124 03:29:48.156600  212849 main.go:143] libmachine: waiting for domain to start...
	I1124 03:29:48.157965  212849 main.go:143] libmachine: domain is now running
	I1124 03:29:48.157989  212849 main.go:143] libmachine: waiting for IP...
	I1124 03:29:48.158931  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:29:48.159621  212849 main.go:143] libmachine: domain test-preload-714953 has current primary IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:29:48.159638  212849 main.go:143] libmachine: found domain IP: 192.168.39.117
	I1124 03:29:48.159643  212849 main.go:143] libmachine: reserving static IP address...
	I1124 03:29:48.160171  212849 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-714953", mac: "52:54:00:17:fa:4c", ip: "192.168.39.117"} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:28:13 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:29:48.160212  212849 main.go:143] libmachine: skip adding static IP to network mk-test-preload-714953 - found existing host DHCP lease matching {name: "test-preload-714953", mac: "52:54:00:17:fa:4c", ip: "192.168.39.117"}
	I1124 03:29:48.160229  212849 main.go:143] libmachine: reserved static IP address 192.168.39.117 for domain test-preload-714953
	I1124 03:29:48.160235  212849 main.go:143] libmachine: waiting for SSH...
	I1124 03:29:48.160247  212849 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 03:29:48.162398  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:29:48.162729  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:28:13 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:29:48.162753  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:29:48.162890  212849 main.go:143] libmachine: Using SSH client type: native
	I1124 03:29:48.163097  212849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1124 03:29:48.163106  212849 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 03:29:51.239687  212849 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.117:22: connect: no route to host
	I1124 03:29:57.319710  212849 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.117:22: connect: no route to host
	I1124 03:30:00.425861  212849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:30:00.429614  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.430050  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:00.430075  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.430384  212849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/config.json ...
	I1124 03:30:00.430581  212849 machine.go:94] provisionDockerMachine start ...
	I1124 03:30:00.432996  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.433389  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:00.433421  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.433623  212849 main.go:143] libmachine: Using SSH client type: native
	I1124 03:30:00.433917  212849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1124 03:30:00.433932  212849 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:30:00.537783  212849 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 03:30:00.537827  212849 buildroot.go:166] provisioning hostname "test-preload-714953"
	I1124 03:30:00.540386  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.540760  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:00.540787  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.540959  212849 main.go:143] libmachine: Using SSH client type: native
	I1124 03:30:00.541181  212849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1124 03:30:00.541197  212849 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-714953 && echo "test-preload-714953" | sudo tee /etc/hostname
	I1124 03:30:00.662796  212849 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-714953
	
	I1124 03:30:00.666194  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.666808  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:00.666844  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.667087  212849 main.go:143] libmachine: Using SSH client type: native
	I1124 03:30:00.667362  212849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1124 03:30:00.667404  212849 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-714953' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-714953/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-714953' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:30:00.778997  212849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:30:00.779030  212849 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21975-185833/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-185833/.minikube}
	I1124 03:30:00.779055  212849 buildroot.go:174] setting up certificates
	I1124 03:30:00.779064  212849 provision.go:84] configureAuth start
	I1124 03:30:00.782079  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.782523  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:00.782559  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.784781  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.785114  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:00.785136  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.785246  212849 provision.go:143] copyHostCerts
	I1124 03:30:00.785293  212849 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem, removing ...
	I1124 03:30:00.785311  212849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem
	I1124 03:30:00.785405  212849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem (1078 bytes)
	I1124 03:30:00.785520  212849 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem, removing ...
	I1124 03:30:00.785532  212849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem
	I1124 03:30:00.785562  212849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem (1123 bytes)
	I1124 03:30:00.785617  212849 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem, removing ...
	I1124 03:30:00.785624  212849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem
	I1124 03:30:00.785646  212849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem (1675 bytes)
	I1124 03:30:00.785694  212849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem org=jenkins.test-preload-714953 san=[127.0.0.1 192.168.39.117 localhost minikube test-preload-714953]
	I1124 03:30:00.855738  212849 provision.go:177] copyRemoteCerts
	I1124 03:30:00.855800  212849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:30:00.858563  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.858938  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:00.858963  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:00.859092  212849 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/test-preload-714953/id_rsa Username:docker}
	I1124 03:30:00.940116  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:30:00.967322  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:30:00.994651  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:30:01.023417  212849 provision.go:87] duration metric: took 244.336176ms to configureAuth
	I1124 03:30:01.023449  212849 buildroot.go:189] setting minikube options for container-runtime
	I1124 03:30:01.023661  212849 config.go:182] Loaded profile config "test-preload-714953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1124 03:30:01.026521  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.026903  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:01.026953  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.027154  212849 main.go:143] libmachine: Using SSH client type: native
	I1124 03:30:01.027456  212849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1124 03:30:01.027478  212849 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:30:01.277092  212849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:30:01.277136  212849 machine.go:97] duration metric: took 846.539295ms to provisionDockerMachine
	I1124 03:30:01.277155  212849 start.go:293] postStartSetup for "test-preload-714953" (driver="kvm2")
	I1124 03:30:01.277171  212849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:30:01.277256  212849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:30:01.280161  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.280567  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:01.280592  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.280737  212849 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/test-preload-714953/id_rsa Username:docker}
	I1124 03:30:01.363058  212849 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:30:01.367761  212849 info.go:137] Remote host: Buildroot 2025.02
	I1124 03:30:01.367791  212849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/addons for local assets ...
	I1124 03:30:01.367876  212849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/files for local assets ...
	I1124 03:30:01.368010  212849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem -> 1897492.pem in /etc/ssl/certs
	I1124 03:30:01.368118  212849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:30:01.378874  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:30:01.406859  212849 start.go:296] duration metric: took 129.68661ms for postStartSetup
	I1124 03:30:01.406909  212849 fix.go:56] duration metric: took 14.497618397s for fixHost
	I1124 03:30:01.409615  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.409983  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:01.410020  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.410182  212849 main.go:143] libmachine: Using SSH client type: native
	I1124 03:30:01.410395  212849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1124 03:30:01.410405  212849 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 03:30:01.512604  212849 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763955001.468617531
	
	I1124 03:30:01.512647  212849 fix.go:216] guest clock: 1763955001.468617531
	I1124 03:30:01.512660  212849 fix.go:229] Guest: 2025-11-24 03:30:01.468617531 +0000 UTC Remote: 2025-11-24 03:30:01.406914439 +0000 UTC m=+24.501655036 (delta=61.703092ms)
	I1124 03:30:01.512687  212849 fix.go:200] guest clock delta is within tolerance: 61.703092ms
	I1124 03:30:01.512698  212849 start.go:83] releasing machines lock for "test-preload-714953", held for 14.603420416s
	I1124 03:30:01.516085  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.516523  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:01.516571  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.517189  212849 ssh_runner.go:195] Run: cat /version.json
	I1124 03:30:01.517284  212849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:30:01.520100  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.520291  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.520576  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:01.520600  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.520652  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:01.520676  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:01.520768  212849 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/test-preload-714953/id_rsa Username:docker}
	I1124 03:30:01.520981  212849 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/test-preload-714953/id_rsa Username:docker}
	I1124 03:30:01.604711  212849 ssh_runner.go:195] Run: systemctl --version
	I1124 03:30:01.630575  212849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:30:01.777431  212849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:30:01.783946  212849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:30:01.784019  212849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:30:01.803720  212849 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:30:01.803748  212849 start.go:496] detecting cgroup driver to use...
	I1124 03:30:01.803808  212849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:30:01.823698  212849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:30:01.841845  212849 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:30:01.841944  212849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:30:01.860297  212849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:30:01.876419  212849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:30:02.017471  212849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:30:02.219151  212849 docker.go:234] disabling docker service ...
	I1124 03:30:02.219229  212849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:30:02.234901  212849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:30:02.249176  212849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:30:02.396211  212849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:30:02.532554  212849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:30:02.548231  212849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:30:02.569655  212849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1124 03:30:02.569727  212849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:30:02.581294  212849 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 03:30:02.581361  212849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:30:02.593360  212849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:30:02.604945  212849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:30:02.616483  212849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:30:02.628445  212849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:30:02.640162  212849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:30:02.659392  212849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:30:02.671055  212849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:30:02.680882  212849 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 03:30:02.680957  212849 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 03:30:02.700208  212849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:30:02.711943  212849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:30:02.848279  212849 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:30:02.954666  212849 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:30:02.954749  212849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:30:02.960059  212849 start.go:564] Will wait 60s for crictl version
	I1124 03:30:02.960142  212849 ssh_runner.go:195] Run: which crictl
	I1124 03:30:02.964302  212849 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 03:30:02.996430  212849 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 03:30:02.996556  212849 ssh_runner.go:195] Run: crio --version
	I1124 03:30:03.024629  212849 ssh_runner.go:195] Run: crio --version
	I1124 03:30:03.054871  212849 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1124 03:30:03.058578  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:03.058960  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:03.058989  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:03.059181  212849 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1124 03:30:03.063698  212849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:30:03.078871  212849 kubeadm.go:884] updating cluster {Name:test-preload-714953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-714953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:30:03.079006  212849 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1124 03:30:03.079049  212849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:30:03.110813  212849 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1124 03:30:03.110887  212849 ssh_runner.go:195] Run: which lz4
	I1124 03:30:03.115024  212849 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 03:30:03.119707  212849 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1124 03:30:03.119734  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1124 03:30:04.480636  212849 crio.go:462] duration metric: took 1.365640331s to copy over tarball
	I1124 03:30:04.480725  212849 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1124 03:30:06.092013  212849 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.611258192s)
	I1124 03:30:06.092050  212849 crio.go:469] duration metric: took 1.611381182s to extract the tarball
	I1124 03:30:06.092058  212849 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1124 03:30:06.131325  212849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:30:06.167226  212849 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:30:06.167259  212849 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:30:06.167267  212849 kubeadm.go:935] updating node { 192.168.39.117 8443 v1.32.0 crio true true} ...
	I1124 03:30:06.167459  212849 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-714953 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-714953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:30:06.167544  212849 ssh_runner.go:195] Run: crio config
	I1124 03:30:06.212488  212849 cni.go:84] Creating CNI manager for ""
	I1124 03:30:06.212524  212849 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:30:06.212546  212849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:30:06.212568  212849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.117 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-714953 NodeName:test-preload-714953 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:30:06.212690  212849 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-714953"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.117"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.117"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
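	
	(The kubeadm, kubelet and kube-proxy configuration rendered above is what gets copied to the node a few lines further down as /var/tmp/minikube/kubeadm.yaml.new and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. If it needs to be inspected after the run, both files can usually be read back over SSH; a sketch using the profile from this run:)
	  out/minikube-linux-amd64 -p test-preload-714953 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	  out/minikube-linux-amd64 -p test-preload-714953 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"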
	
	I1124 03:30:06.212763  212849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1124 03:30:06.224187  212849 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:30:06.224288  212849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:30:06.234958  212849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1124 03:30:06.253421  212849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:30:06.272272  212849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1124 03:30:06.292261  212849 ssh_runner.go:195] Run: grep 192.168.39.117	control-plane.minikube.internal$ /etc/hosts
	I1124 03:30:06.295996  212849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.117	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:30:06.309715  212849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:30:06.446490  212849 ssh_runner.go:195] Run: sudo systemctl start kubelet
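At this point the kubelet systemd drop-in (10-kubeadm.conf), the kubelet.service unit and the kubeadm config have been copied over, the control-plane.minikube.internal entry is in /etc/hosts, systemd has been reloaded and the kubelet started. A minimal manual check of that last step, assuming SSH access to the node (the status and journal commands are illustrative additions, not steps minikube runs):

  sudo systemctl daemon-reload
  sudo systemctl restart kubelet
  systemctl is-active kubelet             # expect "active"
  journalctl -u kubelet -n 20 --no-pager  # last few kubelet log lines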
	I1124 03:30:06.491176  212849 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953 for IP: 192.168.39.117
	I1124 03:30:06.491212  212849 certs.go:195] generating shared ca certs ...
	I1124 03:30:06.491239  212849 certs.go:227] acquiring lock for ca certs: {Name:mk173959192d8348177ca5710cbe68cc42fae47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:30:06.491478  212849 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key
	I1124 03:30:06.491572  212849 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key
	I1124 03:30:06.491591  212849 certs.go:257] generating profile certs ...
	I1124 03:30:06.491713  212849 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/client.key
	I1124 03:30:06.491801  212849 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/apiserver.key.84a7e759
	I1124 03:30:06.491862  212849 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/proxy-client.key
	I1124 03:30:06.492048  212849 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem (1338 bytes)
	W1124 03:30:06.492100  212849 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749_empty.pem, impossibly tiny 0 bytes
	I1124 03:30:06.492114  212849 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 03:30:06.492152  212849 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:30:06.492195  212849 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:30:06.492250  212849 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem (1675 bytes)
	I1124 03:30:06.492323  212849 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:30:06.493172  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:30:06.529572  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 03:30:06.562095  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:30:06.588855  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:30:06.615720  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:30:06.643112  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:30:06.669855  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:30:06.695926  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:30:06.722228  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /usr/share/ca-certificates/1897492.pem (1708 bytes)
	I1124 03:30:06.748433  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:30:06.774651  212849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem --> /usr/share/ca-certificates/189749.pem (1338 bytes)
	I1124 03:30:06.800248  212849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:30:06.818820  212849 ssh_runner.go:195] Run: openssl version
	I1124 03:30:06.824631  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:30:06.836136  212849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:30:06.840726  212849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:39 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:30:06.840776  212849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:30:06.847352  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:30:06.859002  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/189749.pem && ln -fs /usr/share/ca-certificates/189749.pem /etc/ssl/certs/189749.pem"
	I1124 03:30:06.870288  212849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/189749.pem
	I1124 03:30:06.874983  212849 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:47 /usr/share/ca-certificates/189749.pem
	I1124 03:30:06.875033  212849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/189749.pem
	I1124 03:30:06.881362  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/189749.pem /etc/ssl/certs/51391683.0"
	I1124 03:30:06.892984  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1897492.pem && ln -fs /usr/share/ca-certificates/1897492.pem /etc/ssl/certs/1897492.pem"
	I1124 03:30:06.904552  212849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1897492.pem
	I1124 03:30:06.909132  212849 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:47 /usr/share/ca-certificates/1897492.pem
	I1124 03:30:06.909174  212849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1897492.pem
	I1124 03:30:06.915816  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1897492.pem /etc/ssl/certs/3ec20f2e.0"
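The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) come from OpenSSL's subject-hash convention: each CA under /usr/share/ca-certificates is hashed and linked as /etc/ssl/certs/<hash>.0 so the TLS stack can look it up. A sketch of the same pattern for one certificate (path taken from the log):

  # compute the subject hash and create the lookup symlink, as the log does per CA
  H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"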
	I1124 03:30:06.927828  212849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:30:06.932614  212849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:30:06.939171  212849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:30:06.945973  212849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:30:06.952829  212849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:30:06.959363  212849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:30:06.965892  212849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
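Each control-plane certificate is then probed with openssl's -checkend flag, which exits non-zero if the certificate would expire within the given number of seconds (86400 = 24 hours). Standalone, one such check looks like this (certificate path copied from the log):

  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
    echo "cert still valid 24h from now"
  else
    echo "cert expires within 24h - would trigger regeneration"
  fi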
	I1124 03:30:06.972479  212849 kubeadm.go:401] StartCluster: {Name:test-preload-714953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-714953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:30:06.972567  212849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:30:06.972654  212849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:30:07.003352  212849 cri.go:89] found id: ""
	I1124 03:30:07.003467  212849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:30:07.014957  212849 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:30:07.014984  212849 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:30:07.015035  212849 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:30:07.025999  212849 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:30:07.026511  212849 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-714953" does not appear in /home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:30:07.026633  212849 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-185833/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-714953" cluster setting kubeconfig missing "test-preload-714953" context setting]
	I1124 03:30:07.026972  212849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/kubeconfig: {Name:mkcda9156e9d84203343cbeb8993f30147e2224f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:30:07.027559  212849 kapi.go:59] client config for test-preload-714953: &rest.Config{Host:"https://192.168.39.117:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/client.key", CAFile:"/home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 03:30:07.027993  212849 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 03:30:07.028008  212849 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 03:30:07.028013  212849 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 03:30:07.028017  212849 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 03:30:07.028021  212849 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 03:30:07.028473  212849 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:30:07.038655  212849 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.117
	I1124 03:30:07.038680  212849 kubeadm.go:1161] stopping kube-system containers ...
	I1124 03:30:07.038693  212849 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 03:30:07.038729  212849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:30:07.071645  212849 cri.go:89] found id: ""
	I1124 03:30:07.071712  212849 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 03:30:07.089102  212849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:30:07.099689  212849 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:30:07.099709  212849 kubeadm.go:158] found existing configuration files:
	
	I1124 03:30:07.099790  212849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:30:07.109668  212849 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:30:07.109728  212849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:30:07.120824  212849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:30:07.130910  212849 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:30:07.131003  212849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:30:07.141413  212849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:30:07.152121  212849 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:30:07.152180  212849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:30:07.162683  212849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:30:07.173597  212849 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:30:07.173650  212849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
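The grep/rm sequence above is the stale-config cleanup: any of the four kubeconfig files under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it in the next phase. Condensed into a loop, the same logic is roughly:

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: drop it
  done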
	I1124 03:30:07.183926  212849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:30:07.194300  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:30:07.243621  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:30:08.276413  212849 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.03274677s)
	I1124 03:30:08.276502  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:30:08.511985  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:30:08.571463  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:30:08.640272  212849 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:30:08.640404  212849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:30:09.141485  212849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:30:09.641406  212849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:30:10.140671  212849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:30:10.640636  212849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:30:11.140523  212849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:30:11.173998  212849 api_server.go:72] duration metric: took 2.533737056s to wait for apiserver process to appear ...
	I1124 03:30:11.174038  212849 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:30:11.174064  212849 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I1124 03:30:13.942839  212849 api_server.go:279] https://192.168.39.117:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 03:30:13.942878  212849 api_server.go:103] status: https://192.168.39.117:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 03:30:13.942900  212849 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I1124 03:30:13.981789  212849 api_server.go:279] https://192.168.39.117:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 03:30:13.981840  212849 api_server.go:103] status: https://192.168.39.117:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 03:30:14.174182  212849 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I1124 03:30:14.183475  212849 api_server.go:279] https://192.168.39.117:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:30:14.183524  212849 api_server.go:103] status: https://192.168.39.117:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:30:14.674193  212849 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I1124 03:30:14.680771  212849 api_server.go:279] https://192.168.39.117:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:30:14.680804  212849 api_server.go:103] status: https://192.168.39.117:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:30:15.174434  212849 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I1124 03:30:15.187069  212849 api_server.go:279] https://192.168.39.117:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:30:15.187103  212849 api_server.go:103] status: https://192.168.39.117:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:30:15.674808  212849 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I1124 03:30:15.679280  212849 api_server.go:279] https://192.168.39.117:8443/healthz returned 200:
	ok
	I1124 03:30:15.686837  212849 api_server.go:141] control plane version: v1.32.0
	I1124 03:30:15.686870  212849 api_server.go:131] duration metric: took 4.512825126s to wait for apiserver health ...
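The progression above is the expected one for a freshly restarted apiserver: first 403 for system:anonymous (the RBAC bootstrap roles that allow unauthenticated access to /healthz do not exist yet), then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200. The same probe can be reproduced with the profile's admin client certificate (paths copied from the client config logged earlier; ?verbose asks for the per-check breakdown shown above):

  PROFILE=/home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953
  curl --cacert /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt \
       --cert "$PROFILE/client.crt" --key "$PROFILE/client.key" \
       "https://192.168.39.117:8443/healthz?verbose"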
	I1124 03:30:15.686881  212849 cni.go:84] Creating CNI manager for ""
	I1124 03:30:15.686888  212849 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:30:15.688832  212849 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 03:30:15.690065  212849 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 03:30:15.702364  212849 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
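Because the kvm2 driver is paired with the crio runtime, minikube falls back to its built-in bridge CNI and writes a single conflist into /etc/cni/net.d. Purely as an illustration of the file format (the field values below are assumptions, not the exact 496-byte file that was copied), such a bridge conflist looks roughly like:

cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF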
	I1124 03:30:15.723857  212849 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:30:15.730698  212849 system_pods.go:59] 7 kube-system pods found
	I1124 03:30:15.730744  212849 system_pods.go:61] "coredns-668d6bf9bc-m5lqx" [ab104745-92bd-432d-a594-8d724a574841] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:30:15.730753  212849 system_pods.go:61] "etcd-test-preload-714953" [44dc7140-fe26-4f77-b9db-713e7a0b01d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:30:15.730760  212849 system_pods.go:61] "kube-apiserver-test-preload-714953" [1ab4c81b-60a7-45a0-8035-d8393c88f5ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:30:15.730766  212849 system_pods.go:61] "kube-controller-manager-test-preload-714953" [ab80500c-8610-408c-b0d3-b02598aa60b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:30:15.730772  212849 system_pods.go:61] "kube-proxy-fwkdw" [a55586e2-ca99-4bd4-9b19-5178fdcd6d95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:30:15.730777  212849 system_pods.go:61] "kube-scheduler-test-preload-714953" [827f9845-98f9-4c2f-9ead-7104f33bc035] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:30:15.730785  212849 system_pods.go:61] "storage-provisioner" [77ae15b6-89cd-450d-9941-4077c98d9119] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:30:15.730792  212849 system_pods.go:74] duration metric: took 6.911335ms to wait for pod list to return data ...
	I1124 03:30:15.730803  212849 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:30:15.735730  212849 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 03:30:15.735754  212849 node_conditions.go:123] node cpu capacity is 2
	I1124 03:30:15.735770  212849 node_conditions.go:105] duration metric: took 4.962708ms to run NodePressure ...
	I1124 03:30:15.735825  212849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:30:16.011917  212849 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1124 03:30:16.016464  212849 kubeadm.go:744] kubelet initialised
	I1124 03:30:16.016485  212849 kubeadm.go:745] duration metric: took 4.539495ms waiting for restarted kubelet to initialise ...
	I1124 03:30:16.016517  212849 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:30:16.037763  212849 ops.go:34] apiserver oom_adj: -16
	I1124 03:30:16.037787  212849 kubeadm.go:602] duration metric: took 9.022797076s to restartPrimaryControlPlane
	I1124 03:30:16.037799  212849 kubeadm.go:403] duration metric: took 9.06532701s to StartCluster
	I1124 03:30:16.037822  212849 settings.go:142] acquiring lock: {Name:mk66e7c24245b8d0d5ec4dc3d788350fb3f2b31a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:30:16.037905  212849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:30:16.038664  212849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/kubeconfig: {Name:mkcda9156e9d84203343cbeb8993f30147e2224f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:30:16.038950  212849 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:30:16.039017  212849 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:30:16.039127  212849 addons.go:70] Setting storage-provisioner=true in profile "test-preload-714953"
	I1124 03:30:16.039154  212849 addons.go:239] Setting addon storage-provisioner=true in "test-preload-714953"
	W1124 03:30:16.039168  212849 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:30:16.039179  212849 addons.go:70] Setting default-storageclass=true in profile "test-preload-714953"
	I1124 03:30:16.039194  212849 config.go:182] Loaded profile config "test-preload-714953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1124 03:30:16.039199  212849 host.go:66] Checking if "test-preload-714953" exists ...
	I1124 03:30:16.039215  212849 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-714953"
	I1124 03:30:16.040503  212849 out.go:179] * Verifying Kubernetes components...
	I1124 03:30:16.041817  212849 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:30:16.041852  212849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:30:16.041933  212849 kapi.go:59] client config for test-preload-714953: &rest.Config{Host:"https://192.168.39.117:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/client.key", CAFile:"/home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 03:30:16.042331  212849 addons.go:239] Setting addon default-storageclass=true in "test-preload-714953"
	W1124 03:30:16.042357  212849 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:30:16.042399  212849 host.go:66] Checking if "test-preload-714953" exists ...
	I1124 03:30:16.043012  212849 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:30:16.043026  212849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:30:16.044147  212849 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:30:16.044167  212849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:30:16.045843  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:16.046296  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:16.046323  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:16.046512  212849 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/test-preload-714953/id_rsa Username:docker}
	I1124 03:30:16.047489  212849 main.go:143] libmachine: domain test-preload-714953 has defined MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:16.047962  212849 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:17:fa:4c", ip: ""} in network mk-test-preload-714953: {Iface:virbr1 ExpiryTime:2025-11-24 04:29:58 +0000 UTC Type:0 Mac:52:54:00:17:fa:4c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-714953 Clientid:01:52:54:00:17:fa:4c}
	I1124 03:30:16.048008  212849 main.go:143] libmachine: domain test-preload-714953 has defined IP address 192.168.39.117 and MAC address 52:54:00:17:fa:4c in network mk-test-preload-714953
	I1124 03:30:16.048195  212849 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/test-preload-714953/id_rsa Username:docker}
	I1124 03:30:16.268537  212849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:30:16.291634  212849 node_ready.go:35] waiting up to 6m0s for node "test-preload-714953" to be "Ready" ...
	I1124 03:30:16.294119  212849 node_ready.go:49] node "test-preload-714953" is "Ready"
	I1124 03:30:16.294156  212849 node_ready.go:38] duration metric: took 2.46031ms for node "test-preload-714953" to be "Ready" ...
	I1124 03:30:16.294173  212849 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:30:16.294233  212849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:30:16.318851  212849 api_server.go:72] duration metric: took 279.849332ms to wait for apiserver process to appear ...
	I1124 03:30:16.318895  212849 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:30:16.318923  212849 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I1124 03:30:16.324227  212849 api_server.go:279] https://192.168.39.117:8443/healthz returned 200:
	ok
	I1124 03:30:16.325198  212849 api_server.go:141] control plane version: v1.32.0
	I1124 03:30:16.325222  212849 api_server.go:131] duration metric: took 6.319861ms to wait for apiserver health ...
	I1124 03:30:16.325231  212849 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:30:16.328861  212849 system_pods.go:59] 7 kube-system pods found
	I1124 03:30:16.328898  212849 system_pods.go:61] "coredns-668d6bf9bc-m5lqx" [ab104745-92bd-432d-a594-8d724a574841] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:30:16.328905  212849 system_pods.go:61] "etcd-test-preload-714953" [44dc7140-fe26-4f77-b9db-713e7a0b01d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:30:16.328913  212849 system_pods.go:61] "kube-apiserver-test-preload-714953" [1ab4c81b-60a7-45a0-8035-d8393c88f5ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:30:16.328919  212849 system_pods.go:61] "kube-controller-manager-test-preload-714953" [ab80500c-8610-408c-b0d3-b02598aa60b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:30:16.328925  212849 system_pods.go:61] "kube-proxy-fwkdw" [a55586e2-ca99-4bd4-9b19-5178fdcd6d95] Running
	I1124 03:30:16.328931  212849 system_pods.go:61] "kube-scheduler-test-preload-714953" [827f9845-98f9-4c2f-9ead-7104f33bc035] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:30:16.328935  212849 system_pods.go:61] "storage-provisioner" [77ae15b6-89cd-450d-9941-4077c98d9119] Running
	I1124 03:30:16.328942  212849 system_pods.go:74] duration metric: took 3.704175ms to wait for pod list to return data ...
	I1124 03:30:16.328950  212849 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:30:16.332020  212849 default_sa.go:45] found service account: "default"
	I1124 03:30:16.332047  212849 default_sa.go:55] duration metric: took 3.091584ms for default service account to be created ...
	I1124 03:30:16.332060  212849 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:30:16.335010  212849 system_pods.go:86] 7 kube-system pods found
	I1124 03:30:16.335039  212849 system_pods.go:89] "coredns-668d6bf9bc-m5lqx" [ab104745-92bd-432d-a594-8d724a574841] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:30:16.335046  212849 system_pods.go:89] "etcd-test-preload-714953" [44dc7140-fe26-4f77-b9db-713e7a0b01d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:30:16.335055  212849 system_pods.go:89] "kube-apiserver-test-preload-714953" [1ab4c81b-60a7-45a0-8035-d8393c88f5ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:30:16.335060  212849 system_pods.go:89] "kube-controller-manager-test-preload-714953" [ab80500c-8610-408c-b0d3-b02598aa60b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:30:16.335068  212849 system_pods.go:89] "kube-proxy-fwkdw" [a55586e2-ca99-4bd4-9b19-5178fdcd6d95] Running
	I1124 03:30:16.335073  212849 system_pods.go:89] "kube-scheduler-test-preload-714953" [827f9845-98f9-4c2f-9ead-7104f33bc035] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:30:16.335076  212849 system_pods.go:89] "storage-provisioner" [77ae15b6-89cd-450d-9941-4077c98d9119] Running
	I1124 03:30:16.335089  212849 system_pods.go:126] duration metric: took 3.016977ms to wait for k8s-apps to be running ...
	I1124 03:30:16.335100  212849 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:30:16.335144  212849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:30:16.369833  212849 system_svc.go:56] duration metric: took 34.721645ms WaitForService to wait for kubelet
	I1124 03:30:16.369863  212849 kubeadm.go:587] duration metric: took 330.873706ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:30:16.369881  212849 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:30:16.371101  212849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:30:16.375036  212849 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 03:30:16.375061  212849 node_conditions.go:123] node cpu capacity is 2
	I1124 03:30:16.375072  212849 node_conditions.go:105] duration metric: took 5.186696ms to run NodePressure ...
	I1124 03:30:16.375083  212849 start.go:242] waiting for startup goroutines ...
	I1124 03:30:16.378188  212849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:30:17.029145  212849 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:30:17.030794  212849 addons.go:530] duration metric: took 991.770866ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:30:17.030851  212849 start.go:247] waiting for cluster config update ...
	I1124 03:30:17.030868  212849 start.go:256] writing updated cluster config ...
	I1124 03:30:17.031229  212849 ssh_runner.go:195] Run: rm -f paused
	I1124 03:30:17.037013  212849 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:30:17.037594  212849 kapi.go:59] client config for test-preload-714953: &rest.Config{Host:"https://192.168.39.117:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-185833/.minikube/profiles/test-preload-714953/client.key", CAFile:"/home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 03:30:17.040622  212849 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-m5lqx" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:30:19.047676  212849 pod_ready.go:104] pod "coredns-668d6bf9bc-m5lqx" is not "Ready", error: <nil>
	I1124 03:30:21.047631  212849 pod_ready.go:94] pod "coredns-668d6bf9bc-m5lqx" is "Ready"
	I1124 03:30:21.047678  212849 pod_ready.go:86] duration metric: took 4.00701634s for pod "coredns-668d6bf9bc-m5lqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:21.050829  212849 pod_ready.go:83] waiting for pod "etcd-test-preload-714953" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:22.057092  212849 pod_ready.go:94] pod "etcd-test-preload-714953" is "Ready"
	I1124 03:30:22.057136  212849 pod_ready.go:86] duration metric: took 1.006276343s for pod "etcd-test-preload-714953" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:22.059554  212849 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-714953" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:30:24.066557  212849 pod_ready.go:104] pod "kube-apiserver-test-preload-714953" is not "Ready", error: <nil>
	W1124 03:30:26.567030  212849 pod_ready.go:104] pod "kube-apiserver-test-preload-714953" is not "Ready", error: <nil>
	I1124 03:30:29.065363  212849 pod_ready.go:94] pod "kube-apiserver-test-preload-714953" is "Ready"
	I1124 03:30:29.065413  212849 pod_ready.go:86] duration metric: took 7.005827599s for pod "kube-apiserver-test-preload-714953" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:29.067608  212849 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-714953" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:29.071233  212849 pod_ready.go:94] pod "kube-controller-manager-test-preload-714953" is "Ready"
	I1124 03:30:29.071260  212849 pod_ready.go:86] duration metric: took 3.623677ms for pod "kube-controller-manager-test-preload-714953" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:29.073493  212849 pod_ready.go:83] waiting for pod "kube-proxy-fwkdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:29.079187  212849 pod_ready.go:94] pod "kube-proxy-fwkdw" is "Ready"
	I1124 03:30:29.079223  212849 pod_ready.go:86] duration metric: took 5.707749ms for pod "kube-proxy-fwkdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:29.083012  212849 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-714953" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:29.264235  212849 pod_ready.go:94] pod "kube-scheduler-test-preload-714953" is "Ready"
	I1124 03:30:29.264273  212849 pod_ready.go:86] duration metric: took 181.225806ms for pod "kube-scheduler-test-preload-714953" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:30:29.264288  212849 pod_ready.go:40] duration metric: took 12.227225188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
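The final wait loops over every kube-system pod carrying one of the listed control-plane labels and blocks until each reports Ready (or disappears), which took about 12s here. The equivalent spot-check from the host, using the kubeconfig context written earlier (kubectl wait shown only for the CoreDNS label as an example):

  kubectl --context test-preload-714953 -n kube-system get pods
  kubectl --context test-preload-714953 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m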
	I1124 03:30:29.312839  212849 start.go:625] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1124 03:30:29.314741  212849 out.go:203] 
	W1124 03:30:29.316048  212849 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1124 03:30:29.317227  212849 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:30:29.318568  212849 out.go:179] * Done! kubectl is now configured to use "test-preload-714953" cluster and "default" namespace by default
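The only caveat in the summary is the version-skew warning: the host kubectl is 1.34.2 while the cluster runs 1.32.0, two minor versions apart. As the output itself suggests, the bundled kubectl avoids the skew; a sketch of that invocation (profile name from this run, binary path as the harness invokes it):

  out/minikube-linux-amd64 -p test-preload-714953 kubectl -- get pods -A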
	
	
	==> CRI-O <==
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.108037133Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955030108014449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02d02b43-5ce0-4e18-acbe-514ee5ebaf47 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.109023710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd75b5b8-bd40-45cb-a501-585568c3ead1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.109095548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd75b5b8-bd40-45cb-a501-585568c3ead1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.109250200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c57ed192ee510c17d2ea427216a45a2c1caa35a24b67a97fd70438176f5307bc,PodSandboxId:2e20bddc4e52ff09de20f076e530e009701561e16c12632fc0ff169656f3db0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763955018646017376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-m5lqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab104745-92bd-432d-a594-8d724a574841,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b48b12baced21de49ac7d6e897e514f640faafc5d07c9273aa512e88039c96,PodSandboxId:bd9cd7c9ac7b97cad26d1481dcca448f0e3f36482f64edb6ce8d2df9f87cc8d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763955015048210384,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a55586e2-ca99-4bd4-9b19-5178fdcd6d95,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fe29ae7b64c3bc709690602b00ab6669218ac58008434fac7616b0449ad5e0,PodSandboxId:9214640f75bd0c0f3be4b98ef2ce959532084c3d50a570eb11008793a7811652,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763955015088009353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
ae15b6-89cd-450d-9941-4077c98d9119,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407027b0b08391b1656ec523e009c3e2370c63b89e7ce20029d449957ec424e5,PodSandboxId:322a308550b40bd4a39def1109cebd345bca13b7dd8254d1d2db3a1dd4508b8b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763955010699716809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce9ac991
dae1ac11ed117f63a3d5b27,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306153a66b6c69d82edf6c352a374b1a841145b96a2f4d479aa5c85e1830ee4,PodSandboxId:64928f5314d04ee1e610e1720c6b011dd7719e0f3c25c442610bfc2942aadd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763955010695711061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43ba6c32a27ccc6f1bd084043034839,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f16bb653871cde648908f220a3247dd58d6f271d1a95281a43e8338d37f48b,PodSandboxId:10ab6ba4684f2a5770573855606c1a89550160031673e9fae7a21b7990ba6bfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763955010686017230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0774b8983d037018bbcab5ae104915,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204f2a464ee6fda17a22efb70890553680bb1e9808eaf82b10b939ef7712d599,PodSandboxId:7e0b27135bf152b9c0dba943ad5e46f06fb3ac9f66a7a1e7b412b5964e317872,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763955010637856343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d74a81cd7ab3ea8225eb75e1465fabf,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd75b5b8-bd40-45cb-a501-585568c3ead1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.143983981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f71fcee1-1533-4941-9840-76882887b3be name=/runtime.v1.RuntimeService/Version
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.144060875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f71fcee1-1533-4941-9840-76882887b3be name=/runtime.v1.RuntimeService/Version
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.145157154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f18594ad-e78a-48e9-b921-cf9c25fc7c62 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.145604028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955030145581320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f18594ad-e78a-48e9-b921-cf9c25fc7c62 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.146533272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c733a21-309f-4134-8049-b3fb617fd148 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.146590713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c733a21-309f-4134-8049-b3fb617fd148 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.146808959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c57ed192ee510c17d2ea427216a45a2c1caa35a24b67a97fd70438176f5307bc,PodSandboxId:2e20bddc4e52ff09de20f076e530e009701561e16c12632fc0ff169656f3db0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763955018646017376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-m5lqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab104745-92bd-432d-a594-8d724a574841,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b48b12baced21de49ac7d6e897e514f640faafc5d07c9273aa512e88039c96,PodSandboxId:bd9cd7c9ac7b97cad26d1481dcca448f0e3f36482f64edb6ce8d2df9f87cc8d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763955015048210384,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a55586e2-ca99-4bd4-9b19-5178fdcd6d95,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fe29ae7b64c3bc709690602b00ab6669218ac58008434fac7616b0449ad5e0,PodSandboxId:9214640f75bd0c0f3be4b98ef2ce959532084c3d50a570eb11008793a7811652,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763955015088009353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
ae15b6-89cd-450d-9941-4077c98d9119,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407027b0b08391b1656ec523e009c3e2370c63b89e7ce20029d449957ec424e5,PodSandboxId:322a308550b40bd4a39def1109cebd345bca13b7dd8254d1d2db3a1dd4508b8b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763955010699716809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce9ac991
dae1ac11ed117f63a3d5b27,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306153a66b6c69d82edf6c352a374b1a841145b96a2f4d479aa5c85e1830ee4,PodSandboxId:64928f5314d04ee1e610e1720c6b011dd7719e0f3c25c442610bfc2942aadd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763955010695711061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43ba6c32a27ccc6f1bd084043034839,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f16bb653871cde648908f220a3247dd58d6f271d1a95281a43e8338d37f48b,PodSandboxId:10ab6ba4684f2a5770573855606c1a89550160031673e9fae7a21b7990ba6bfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763955010686017230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0774b8983d037018bbcab5ae104915,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204f2a464ee6fda17a22efb70890553680bb1e9808eaf82b10b939ef7712d599,PodSandboxId:7e0b27135bf152b9c0dba943ad5e46f06fb3ac9f66a7a1e7b412b5964e317872,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763955010637856343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d74a81cd7ab3ea8225eb75e1465fabf,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c733a21-309f-4134-8049-b3fb617fd148 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.179748551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e06b3832-fbb5-48bc-8fe7-c98c8b4f8375 name=/runtime.v1.RuntimeService/Version
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.180013119Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e06b3832-fbb5-48bc-8fe7-c98c8b4f8375 name=/runtime.v1.RuntimeService/Version
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.181289918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=533ac62c-5ca2-473d-822f-ab698938f17c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.181824118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955030181795094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=533ac62c-5ca2-473d-822f-ab698938f17c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.183134664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=453a55a0-b5b5-45c7-a344-d7f298ef9a9b name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.183255821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=453a55a0-b5b5-45c7-a344-d7f298ef9a9b name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.183567307Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c57ed192ee510c17d2ea427216a45a2c1caa35a24b67a97fd70438176f5307bc,PodSandboxId:2e20bddc4e52ff09de20f076e530e009701561e16c12632fc0ff169656f3db0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763955018646017376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-m5lqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab104745-92bd-432d-a594-8d724a574841,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b48b12baced21de49ac7d6e897e514f640faafc5d07c9273aa512e88039c96,PodSandboxId:bd9cd7c9ac7b97cad26d1481dcca448f0e3f36482f64edb6ce8d2df9f87cc8d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763955015048210384,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a55586e2-ca99-4bd4-9b19-5178fdcd6d95,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fe29ae7b64c3bc709690602b00ab6669218ac58008434fac7616b0449ad5e0,PodSandboxId:9214640f75bd0c0f3be4b98ef2ce959532084c3d50a570eb11008793a7811652,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763955015088009353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
ae15b6-89cd-450d-9941-4077c98d9119,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407027b0b08391b1656ec523e009c3e2370c63b89e7ce20029d449957ec424e5,PodSandboxId:322a308550b40bd4a39def1109cebd345bca13b7dd8254d1d2db3a1dd4508b8b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763955010699716809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce9ac991
dae1ac11ed117f63a3d5b27,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306153a66b6c69d82edf6c352a374b1a841145b96a2f4d479aa5c85e1830ee4,PodSandboxId:64928f5314d04ee1e610e1720c6b011dd7719e0f3c25c442610bfc2942aadd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763955010695711061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43ba6c32a27ccc6f1bd084043034839,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f16bb653871cde648908f220a3247dd58d6f271d1a95281a43e8338d37f48b,PodSandboxId:10ab6ba4684f2a5770573855606c1a89550160031673e9fae7a21b7990ba6bfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763955010686017230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0774b8983d037018bbcab5ae104915,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204f2a464ee6fda17a22efb70890553680bb1e9808eaf82b10b939ef7712d599,PodSandboxId:7e0b27135bf152b9c0dba943ad5e46f06fb3ac9f66a7a1e7b412b5964e317872,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763955010637856343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d74a81cd7ab3ea8225eb75e1465fabf,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=453a55a0-b5b5-45c7-a344-d7f298ef9a9b name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.211845197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=91236d1f-3f7b-4b45-97f0-874f3e3517fa name=/runtime.v1.RuntimeService/Version
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.212079959Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=91236d1f-3f7b-4b45-97f0-874f3e3517fa name=/runtime.v1.RuntimeService/Version
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.213865413Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=120f57cf-6ac3-46d5-8626-f10b780f990e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.214579495Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955030214552933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=120f57cf-6ac3-46d5-8626-f10b780f990e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.215590821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a0d3234-5bc1-4a19-a6a0-2971090aaa1a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.215657396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a0d3234-5bc1-4a19-a6a0-2971090aaa1a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:30:30 test-preload-714953 crio[828]: time="2025-11-24 03:30:30.215823213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c57ed192ee510c17d2ea427216a45a2c1caa35a24b67a97fd70438176f5307bc,PodSandboxId:2e20bddc4e52ff09de20f076e530e009701561e16c12632fc0ff169656f3db0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763955018646017376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-m5lqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab104745-92bd-432d-a594-8d724a574841,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b48b12baced21de49ac7d6e897e514f640faafc5d07c9273aa512e88039c96,PodSandboxId:bd9cd7c9ac7b97cad26d1481dcca448f0e3f36482f64edb6ce8d2df9f87cc8d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763955015048210384,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a55586e2-ca99-4bd4-9b19-5178fdcd6d95,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fe29ae7b64c3bc709690602b00ab6669218ac58008434fac7616b0449ad5e0,PodSandboxId:9214640f75bd0c0f3be4b98ef2ce959532084c3d50a570eb11008793a7811652,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763955015088009353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
ae15b6-89cd-450d-9941-4077c98d9119,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407027b0b08391b1656ec523e009c3e2370c63b89e7ce20029d449957ec424e5,PodSandboxId:322a308550b40bd4a39def1109cebd345bca13b7dd8254d1d2db3a1dd4508b8b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763955010699716809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce9ac991
dae1ac11ed117f63a3d5b27,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306153a66b6c69d82edf6c352a374b1a841145b96a2f4d479aa5c85e1830ee4,PodSandboxId:64928f5314d04ee1e610e1720c6b011dd7719e0f3c25c442610bfc2942aadd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763955010695711061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43ba6c32a27ccc6f1bd084043034839,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f16bb653871cde648908f220a3247dd58d6f271d1a95281a43e8338d37f48b,PodSandboxId:10ab6ba4684f2a5770573855606c1a89550160031673e9fae7a21b7990ba6bfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763955010686017230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0774b8983d037018bbcab5ae104915,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204f2a464ee6fda17a22efb70890553680bb1e9808eaf82b10b939ef7712d599,PodSandboxId:7e0b27135bf152b9c0dba943ad5e46f06fb3ac9f66a7a1e7b412b5964e317872,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763955010637856343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-714953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d74a81cd7ab3ea8225eb75e1465fabf,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a0d3234-5bc1-4a19-a6a0-2971090aaa1a name=/runtime.v1.RuntimeService/ListContainers
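	The CRI-O excerpt above is the daemon's own debug stream (Version, ImageFsInfo and ListContainers polling). If more context than the captured tail is needed, the full journal can usually be pulled from the node; a sketch, assuming crio runs as the usual systemd unit inside the minikube VM:
	
	  # dump recent CRI-O daemon logs from inside the test-preload-714953 node
	  out/minikube-linux-amd64 -p test-preload-714953 ssh "sudo journalctl -u crio --no-pager -n 200"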
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	c57ed192ee510       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   1                   2e20bddc4e52f       coredns-668d6bf9bc-m5lqx                      kube-system
	62fe29ae7b64c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       2                   9214640f75bd0       storage-provisioner                           kube-system
	08b48b12baced       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   bd9cd7c9ac7b9       kube-proxy-fwkdw                              kube-system
	407027b0b0839       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   322a308550b40       kube-scheduler-test-preload-714953            kube-system
	3306153a66b6c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   64928f5314d04       etcd-test-preload-714953                      kube-system
	e9f16bb653871       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   10ab6ba4684f2       kube-controller-manager-test-preload-714953   kube-system
	204f2a464ee6f       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   7e0b27135bf15       kube-apiserver-test-preload-714953            kube-system
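	The container status table above is the CRI view of the node at the time the logs were collected. A rough way to reproduce it by hand, assuming crictl is available inside the VM (it normally is with the cri-o runtime), is to run it over minikube ssh:
	
	  # list all CRI containers, including exited ones, on the node
	  out/minikube-linux-amd64 -p test-preload-714953 ssh "sudo crictl ps -a"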
	
	
	==> coredns [c57ed192ee510c17d2ea427216a45a2c1caa35a24b67a97fd70438176f5307bc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33661 - 18121 "HINFO IN 115763128736171323.3387108537363186703. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.040326831s
	
	
	==> describe nodes <==
	Name:               test-preload-714953
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-714953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=test-preload-714953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_28_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:28:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-714953
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:30:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:30:16 +0000   Mon, 24 Nov 2025 03:28:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:30:16 +0000   Mon, 24 Nov 2025 03:28:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:30:16 +0000   Mon, 24 Nov 2025 03:28:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:30:16 +0000   Mon, 24 Nov 2025 03:30:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    test-preload-714953
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6ed2e2e051543c2ba8962c6e5779730
	  System UUID:                a6ed2e2e-0515-43c2-ba89-62c6e5779730
	  Boot ID:                    5249517b-1329-441f-b638-8c76ef880d48
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-m5lqx                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     100s
	  kube-system                 etcd-test-preload-714953                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         107s
	  kube-system                 kube-apiserver-test-preload-714953             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-714953    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-fwkdw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-test-preload-714953             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 99s                  kube-proxy       
	  Normal   Starting                 15s                  kube-proxy       
	  Normal   Starting                 112s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  111s (x8 over 112s)  kubelet          Node test-preload-714953 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s (x8 over 112s)  kubelet          Node test-preload-714953 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s (x7 over 112s)  kubelet          Node test-preload-714953 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    105s                 kubelet          Node test-preload-714953 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  105s                 kubelet          Node test-preload-714953 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     105s                 kubelet          Node test-preload-714953 status is now: NodeHasSufficientPID
	  Normal   Starting                 105s                 kubelet          Starting kubelet.
	  Normal   NodeReady                104s                 kubelet          Node test-preload-714953 status is now: NodeReady
	  Normal   RegisteredNode           102s                 node-controller  Node test-preload-714953 event: Registered Node test-preload-714953 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-714953 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-714953 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-714953 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                  kubelet          Node test-preload-714953 has been rebooted, boot id: 5249517b-1329-441f-b638-8c76ef880d48
	  Normal   RegisteredNode           13s                  node-controller  Node test-preload-714953 event: Registered Node test-preload-714953 in Controller
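	The node description above (labels, conditions, capacity, allocated resources and events) can be regenerated against the same cluster with kubectl; a minimal sketch, assuming the kubeconfig context created for the test-preload-714953 profile still exists:
	
	  # re-run the describe that produced this section
	  kubectl --context test-preload-714953 describe node test-preload-714953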
	
	
	==> dmesg <==
	[Nov24 03:29] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000045] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007644] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.964798] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov24 03:30] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.093799] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.491231] kauditd_printk_skb: 177 callbacks suppressed
	[  +2.335355] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [3306153a66b6c69d82edf6c352a374b1a841145b96a2f4d479aa5c85e1830ee4] <==
	{"level":"info","ts":"2025-11-24T03:30:11.154541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 switched to configuration voters=(15591163477497366083)"}
	{"level":"info","ts":"2025-11-24T03:30:11.159567Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","added-peer-id":"d85ef093c7464643","added-peer-peer-urls":["https://192.168.39.117:2380"]}
	{"level":"info","ts":"2025-11-24T03:30:11.159682Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:30:11.159727Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:30:11.168571Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T03:30:11.168829Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"d85ef093c7464643","initial-advertise-peer-urls":["https://192.168.39.117:2380"],"listen-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.117:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T03:30:11.168874Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T03:30:11.168946Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2025-11-24T03:30:11.168966Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2025-11-24T03:30:12.792666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T03:30:12.792723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T03:30:12.792771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 received MsgPreVoteResp from d85ef093c7464643 at term 2"}
	{"level":"info","ts":"2025-11-24T03:30:12.792785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T03:30:12.792795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 received MsgVoteResp from d85ef093c7464643 at term 3"}
	{"level":"info","ts":"2025-11-24T03:30:12.792803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became leader at term 3"}
	{"level":"info","ts":"2025-11-24T03:30:12.792810Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d85ef093c7464643 elected leader d85ef093c7464643 at term 3"}
	{"level":"info","ts":"2025-11-24T03:30:12.797667Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"d85ef093c7464643","local-member-attributes":"{Name:test-preload-714953 ClientURLs:[https://192.168.39.117:2379]}","request-path":"/0/members/d85ef093c7464643/attributes","cluster-id":"44831ab0f42e7be7","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T03:30:12.797687Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:30:12.797955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:30:12.798236Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T03:30:12.798279Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T03:30:12.798593Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T03:30:12.798768Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T03:30:12.799166Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.117:2379"}
	{"level":"info","ts":"2025-11-24T03:30:12.799376Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:30:30 up 0 min,  0 users,  load average: 0.83, 0.22, 0.07
	Linux test-preload-714953 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Nov 24 01:33:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [204f2a464ee6fda17a22efb70890553680bb1e9808eaf82b10b939ef7712d599] <==
	I1124 03:30:14.001781       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1124 03:30:14.001820       1 policy_source.go:240] refreshing policies
	I1124 03:30:14.013389       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:30:14.015640       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1124 03:30:14.015685       1 aggregator.go:171] initial CRD sync complete...
	I1124 03:30:14.015691       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 03:30:14.015696       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:30:14.015701       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:30:14.060662       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:30:14.063253       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1124 03:30:14.063824       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 03:30:14.063976       1 shared_informer.go:320] Caches are synced for configmaps
	I1124 03:30:14.064002       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 03:30:14.064008       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 03:30:14.066308       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1124 03:30:14.070188       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 03:30:14.623283       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1124 03:30:14.867216       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:30:15.815075       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1124 03:30:15.848395       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1124 03:30:15.883709       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:30:15.890307       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:30:17.404312       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1124 03:30:17.504065       1 controller.go:615] quota admission added evaluator for: endpoints
	I1124 03:30:17.553536       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e9f16bb653871cde648908f220a3247dd58d6f271d1a95281a43e8338d37f48b] <==
	I1124 03:30:17.200273       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-714953"
	I1124 03:30:17.200696       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1124 03:30:17.201175       1 shared_informer.go:320] Caches are synced for expand
	I1124 03:30:17.203821       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1124 03:30:17.204011       1 shared_informer.go:320] Caches are synced for endpoint
	I1124 03:30:17.208166       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1124 03:30:17.208545       1 shared_informer.go:320] Caches are synced for resource quota
	I1124 03:30:17.211749       1 shared_informer.go:320] Caches are synced for daemon sets
	I1124 03:30:17.214086       1 shared_informer.go:320] Caches are synced for garbage collector
	I1124 03:30:17.214136       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:30:17.214143       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:30:17.214706       1 shared_informer.go:320] Caches are synced for job
	I1124 03:30:17.245799       1 shared_informer.go:320] Caches are synced for garbage collector
	I1124 03:30:17.250316       1 shared_informer.go:320] Caches are synced for deployment
	I1124 03:30:17.251718       1 shared_informer.go:320] Caches are synced for disruption
	I1124 03:30:17.252717       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1124 03:30:17.255128       1 shared_informer.go:320] Caches are synced for taint
	I1124 03:30:17.255196       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 03:30:17.255252       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-714953"
	I1124 03:30:17.255284       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 03:30:17.412727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="211.978011ms"
	I1124 03:30:17.412852       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="60.094µs"
	I1124 03:30:19.740476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="89.734µs"
	I1124 03:30:20.810986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="17.495328ms"
	I1124 03:30:20.811148       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.939µs"
	
	
	==> kube-proxy [08b48b12baced21de49ac7d6e897e514f640faafc5d07c9273aa512e88039c96] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1124 03:30:15.380270       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1124 03:30:15.389414       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.117"]
	E1124 03:30:15.389561       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:30:15.422929       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1124 03:30:15.422982       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 03:30:15.423005       1 server_linux.go:170] "Using iptables Proxier"
	I1124 03:30:15.425688       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:30:15.425958       1 server.go:497] "Version info" version="v1.32.0"
	I1124 03:30:15.425985       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:30:15.427937       1 config.go:199] "Starting service config controller"
	I1124 03:30:15.427980       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1124 03:30:15.428009       1 config.go:105] "Starting endpoint slice config controller"
	I1124 03:30:15.428028       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1124 03:30:15.428575       1 config.go:329] "Starting node config controller"
	I1124 03:30:15.428601       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1124 03:30:15.528757       1 shared_informer.go:320] Caches are synced for node config
	I1124 03:30:15.528787       1 shared_informer.go:320] Caches are synced for service config
	I1124 03:30:15.528796       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [407027b0b08391b1656ec523e009c3e2370c63b89e7ce20029d449957ec424e5] <==
	I1124 03:30:11.889510       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:30:13.958154       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:30:13.958242       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:30:13.958253       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:30:13.958260       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:30:14.017320       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1124 03:30:14.017388       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:30:14.020747       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:30:14.020768       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1124 03:30:14.020783       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:30:14.020840       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 03:30:14.121233       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.108173    1160 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-714953"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.108198    1160 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.109566    1160 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.111828    1160 setters.go:602] "Node became not ready" node="test-preload-714953" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T03:30:14Z","lastTransitionTime":"2025-11-24T03:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: E1124 03:30:14.119195    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-714953\" already exists" pod="kube-system/kube-scheduler-test-preload-714953"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.119245    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-714953"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: E1124 03:30:14.128744    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-714953\" already exists" pod="kube-system/etcd-test-preload-714953"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.565608    1160 apiserver.go:52] "Watching apiserver"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: E1124 03:30:14.569916    1160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-m5lqx" podUID="ab104745-92bd-432d-a594-8d724a574841"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.584658    1160 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.618892    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a55586e2-ca99-4bd4-9b19-5178fdcd6d95-lib-modules\") pod \"kube-proxy-fwkdw\" (UID: \"a55586e2-ca99-4bd4-9b19-5178fdcd6d95\") " pod="kube-system/kube-proxy-fwkdw"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.618951    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a55586e2-ca99-4bd4-9b19-5178fdcd6d95-xtables-lock\") pod \"kube-proxy-fwkdw\" (UID: \"a55586e2-ca99-4bd4-9b19-5178fdcd6d95\") " pod="kube-system/kube-proxy-fwkdw"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: I1124 03:30:14.619018    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/77ae15b6-89cd-450d-9941-4077c98d9119-tmp\") pod \"storage-provisioner\" (UID: \"77ae15b6-89cd-450d-9941-4077c98d9119\") " pod="kube-system/storage-provisioner"
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: E1124 03:30:14.619354    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 03:30:14 test-preload-714953 kubelet[1160]: E1124 03:30:14.621898    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab104745-92bd-432d-a594-8d724a574841-config-volume podName:ab104745-92bd-432d-a594-8d724a574841 nodeName:}" failed. No retries permitted until 2025-11-24 03:30:15.121878482 +0000 UTC m=+6.653958218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ab104745-92bd-432d-a594-8d724a574841-config-volume") pod "coredns-668d6bf9bc-m5lqx" (UID: "ab104745-92bd-432d-a594-8d724a574841") : object "kube-system"/"coredns" not registered
	Nov 24 03:30:15 test-preload-714953 kubelet[1160]: E1124 03:30:15.123757    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 03:30:15 test-preload-714953 kubelet[1160]: E1124 03:30:15.123968    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab104745-92bd-432d-a594-8d724a574841-config-volume podName:ab104745-92bd-432d-a594-8d724a574841 nodeName:}" failed. No retries permitted until 2025-11-24 03:30:16.123880659 +0000 UTC m=+7.655960407 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ab104745-92bd-432d-a594-8d724a574841-config-volume") pod "coredns-668d6bf9bc-m5lqx" (UID: "ab104745-92bd-432d-a594-8d724a574841") : object "kube-system"/"coredns" not registered
	Nov 24 03:30:16 test-preload-714953 kubelet[1160]: I1124 03:30:16.011877    1160 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 24 03:30:16 test-preload-714953 kubelet[1160]: E1124 03:30:16.131340    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 03:30:16 test-preload-714953 kubelet[1160]: E1124 03:30:16.131419    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab104745-92bd-432d-a594-8d724a574841-config-volume podName:ab104745-92bd-432d-a594-8d724a574841 nodeName:}" failed. No retries permitted until 2025-11-24 03:30:18.131406412 +0000 UTC m=+9.663486147 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ab104745-92bd-432d-a594-8d724a574841-config-volume") pod "coredns-668d6bf9bc-m5lqx" (UID: "ab104745-92bd-432d-a594-8d724a574841") : object "kube-system"/"coredns" not registered
	Nov 24 03:30:18 test-preload-714953 kubelet[1160]: E1124 03:30:18.647889    1160 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955018646639479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 24 03:30:18 test-preload-714953 kubelet[1160]: E1124 03:30:18.647939    1160 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955018646639479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 24 03:30:20 test-preload-714953 kubelet[1160]: I1124 03:30:20.725958    1160 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 03:30:28 test-preload-714953 kubelet[1160]: E1124 03:30:28.650408    1160 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955028649831834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 24 03:30:28 test-preload-714953 kubelet[1160]: E1124 03:30:28.650522    1160 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955028649831834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [62fe29ae7b64c3bc709690602b00ab6669218ac58008434fac7616b0449ad5e0] <==
	I1124 03:30:15.261073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-714953 -n test-preload-714953
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-714953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-714953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-714953
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-714953: (1.051090415s)
--- FAIL: TestPreload (153.60s)
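Note on the kubelet entries in the log above: the volume manager keeps retrying the CoreDNS config-volume mount with a doubling delay (durationBeforeRetry of 500ms, then 1s, then 2s) until the "kube-system"/"coredns" ConfigMap is registered. The Go sketch below is only a rough illustration of that capped exponential-backoff pattern, not kubelet source; mountConfigMapVolume, the registered predicate, and the 2-minute cap are hypothetical stand-ins.

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountConfigMapVolume stands in for the real MountVolume.SetUp call; it
// fails until the ConfigMap becomes visible to the kubelet (hypothetical).
func mountConfigMapVolume(registered func() bool) error {
	if !registered() {
		return errors.New(`object "kube-system"/"coredns" not registered`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // initial durationBeforeRetry seen in the log
	const maxDelay = 2 * time.Minute // assumed cap; the real limit is not shown in this report

	attempts := 0
	registered := func() bool { attempts++; return attempts > 3 } // flips once the ConfigMap is synced

	for {
		err := mountConfigMapVolume(registered)
		if err == nil {
			fmt.Println("config-volume mounted")
			return
		}
		fmt.Printf("MountVolume.SetUp failed (%v); no retries permitted for %v\n", err, delay)
		time.Sleep(delay)
		// Double the delay after each failure, as in the 500ms -> 1s -> 2s sequence above.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}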

x
+
TestPause/serial/SecondStartNoReconfiguration (67.83s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-338254 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-338254 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.844919072s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-338254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-338254" primary control-plane node in "pause-338254" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-338254" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1124 03:39:44.869063  222063 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:39:44.869357  222063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:44.869381  222063 out.go:374] Setting ErrFile to fd 2...
	I1124 03:39:44.869387  222063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:44.869687  222063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:39:44.870842  222063 out.go:368] Setting JSON to false
	I1124 03:39:44.871860  222063 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12125,"bootTime":1763943460,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:39:44.871929  222063 start.go:143] virtualization: kvm guest
	I1124 03:39:44.873254  222063 out.go:179] * [pause-338254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:39:44.874625  222063 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:39:44.874625  222063 notify.go:221] Checking for updates...
	I1124 03:39:44.876575  222063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:39:44.877736  222063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:39:44.880206  222063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 03:39:44.881511  222063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:39:44.882801  222063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:39:44.884281  222063 config.go:182] Loaded profile config "pause-338254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:39:44.884767  222063 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:39:44.923472  222063 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 03:39:44.924576  222063 start.go:309] selected driver: kvm2
	I1124 03:39:44.924594  222063 start.go:927] validating driver "kvm2" against &{Name:pause-338254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-338254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:39:44.924749  222063 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:39:44.925721  222063 cni.go:84] Creating CNI manager for ""
	I1124 03:39:44.925800  222063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:39:44.925869  222063 start.go:353] cluster config:
	{Name:pause-338254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-338254 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:39:44.926009  222063 iso.go:125] acquiring lock: {Name:mk63ee8f30093c8c7d0696dd2486a8eb0d8bd024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:39:44.927266  222063 out.go:179] * Starting "pause-338254" primary control-plane node in "pause-338254" cluster
	I1124 03:39:44.928208  222063 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:39:44.928250  222063 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:39:44.928264  222063 cache.go:65] Caching tarball of preloaded images
	I1124 03:39:44.928364  222063 preload.go:238] Found /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:39:44.928402  222063 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:39:44.928592  222063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/config.json ...
	I1124 03:39:44.928825  222063 start.go:360] acquireMachinesLock for pause-338254: {Name:mk6edb9cd27540c3b670af896ffc377aa954769e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 03:40:03.512337  222063 start.go:364] duration metric: took 18.583431842s to acquireMachinesLock for "pause-338254"
	I1124 03:40:03.512425  222063 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:40:03.512438  222063 fix.go:54] fixHost starting: 
	I1124 03:40:03.514902  222063 fix.go:112] recreateIfNeeded on pause-338254: state=Running err=<nil>
	W1124 03:40:03.514944  222063 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:40:03.516608  222063 out.go:252] * Updating the running kvm2 "pause-338254" VM ...
	I1124 03:40:03.516645  222063 machine.go:94] provisionDockerMachine start ...
	I1124 03:40:03.521237  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.521849  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.521881  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.522086  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.522445  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:03.522458  222063 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:40:03.634652  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-338254
	
	I1124 03:40:03.634700  222063 buildroot.go:166] provisioning hostname "pause-338254"
	I1124 03:40:03.638433  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.638937  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.638968  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.639164  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.639482  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:03.639503  222063 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-338254 && echo "pause-338254" | sudo tee /etc/hostname
	I1124 03:40:03.776003  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-338254
	
	I1124 03:40:03.779278  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.779751  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.779779  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.779976  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.780256  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:03.780275  222063 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-338254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-338254/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-338254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:40:03.889655  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:40:03.889689  222063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21975-185833/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-185833/.minikube}
	I1124 03:40:03.889739  222063 buildroot.go:174] setting up certificates
	I1124 03:40:03.889754  222063 provision.go:84] configureAuth start
	I1124 03:40:03.894105  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.894686  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.894727  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.898794  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.899599  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.899649  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.899841  222063 provision.go:143] copyHostCerts
	I1124 03:40:03.899924  222063 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem, removing ...
	I1124 03:40:03.899945  222063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem
	I1124 03:40:03.900023  222063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem (1078 bytes)
	I1124 03:40:03.900191  222063 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem, removing ...
	I1124 03:40:03.900208  222063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem
	I1124 03:40:03.900264  222063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem (1123 bytes)
	I1124 03:40:03.900350  222063 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem, removing ...
	I1124 03:40:03.900360  222063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem
	I1124 03:40:03.900421  222063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem (1675 bytes)
	I1124 03:40:03.900503  222063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem org=jenkins.pause-338254 san=[127.0.0.1 192.168.39.187 localhost minikube pause-338254]
	I1124 03:40:03.983993  222063 provision.go:177] copyRemoteCerts
	I1124 03:40:03.984088  222063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:40:03.987664  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.988301  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.988341  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.988549  222063 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/pause-338254/id_rsa Username:docker}
	I1124 03:40:04.079313  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:40:04.115013  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 03:40:04.152539  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:40:04.193229  222063 provision.go:87] duration metric: took 303.45674ms to configureAuth
	I1124 03:40:04.193269  222063 buildroot.go:189] setting minikube options for container-runtime
	I1124 03:40:04.193570  222063 config.go:182] Loaded profile config "pause-338254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:40:04.197162  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:04.197668  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:04.197704  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:04.197955  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:04.198285  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:04.198314  222063 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:40:09.943926  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:40:09.944018  222063 machine.go:97] duration metric: took 6.427361082s to provisionDockerMachine
	I1124 03:40:09.944041  222063 start.go:293] postStartSetup for "pause-338254" (driver="kvm2")
	I1124 03:40:09.944127  222063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:40:09.944231  222063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:40:09.947662  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:09.948208  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:09.948249  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:09.948440  222063 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/pause-338254/id_rsa Username:docker}
	I1124 03:40:10.041119  222063 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:40:10.047558  222063 info.go:137] Remote host: Buildroot 2025.02
	I1124 03:40:10.047595  222063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/addons for local assets ...
	I1124 03:40:10.047678  222063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/files for local assets ...
	I1124 03:40:10.047776  222063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem -> 1897492.pem in /etc/ssl/certs
	I1124 03:40:10.047918  222063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:40:10.064415  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:40:10.099554  222063 start.go:296] duration metric: took 155.491179ms for postStartSetup
	I1124 03:40:10.099624  222063 fix.go:56] duration metric: took 6.587166942s for fixHost
	I1124 03:40:10.102980  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.103506  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:10.103556  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.103774  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:10.104073  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:10.104085  222063 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 03:40:10.223487  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763955610.219281221
	
	I1124 03:40:10.223543  222063 fix.go:216] guest clock: 1763955610.219281221
	I1124 03:40:10.223556  222063 fix.go:229] Guest: 2025-11-24 03:40:10.219281221 +0000 UTC Remote: 2025-11-24 03:40:10.099629369 +0000 UTC m=+25.294421385 (delta=119.651852ms)
	I1124 03:40:10.223583  222063 fix.go:200] guest clock delta is within tolerance: 119.651852ms
	I1124 03:40:10.223591  222063 start.go:83] releasing machines lock for "pause-338254", held for 6.711196524s
	I1124 03:40:10.227781  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.228322  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:10.228359  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.229330  222063 ssh_runner.go:195] Run: cat /version.json
	I1124 03:40:10.229610  222063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:40:10.234293  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.234644  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.235132  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:10.235168  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.235409  222063 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/pause-338254/id_rsa Username:docker}
	I1124 03:40:10.235978  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:10.236013  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.236255  222063 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/pause-338254/id_rsa Username:docker}
	I1124 03:40:10.355172  222063 ssh_runner.go:195] Run: systemctl --version
	I1124 03:40:10.368902  222063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:40:10.540967  222063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:40:10.551722  222063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:40:10.551831  222063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:40:10.566234  222063 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:40:10.566267  222063 start.go:496] detecting cgroup driver to use...
	I1124 03:40:10.566338  222063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:40:10.597450  222063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:40:10.625760  222063 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:40:10.625849  222063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:40:10.655727  222063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:40:10.672934  222063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:40:10.866745  222063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:40:11.067857  222063 docker.go:234] disabling docker service ...
	I1124 03:40:11.067958  222063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:40:11.109348  222063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:40:11.131672  222063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:40:11.334180  222063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:40:11.525859  222063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:40:11.543942  222063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:40:11.569662  222063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:40:11.569739  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.584795  222063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 03:40:11.585235  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.598966  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.625060  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.646293  222063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:40:11.663717  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.680277  222063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.697052  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.711133  222063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:40:11.722490  222063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:40:11.734028  222063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:11.918683  222063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:40:12.184068  222063 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:40:12.184265  222063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:40:12.193189  222063 start.go:564] Will wait 60s for crictl version
	I1124 03:40:12.193285  222063 ssh_runner.go:195] Run: which crictl
	I1124 03:40:12.203863  222063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 03:40:12.275618  222063 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 03:40:12.275740  222063 ssh_runner.go:195] Run: crio --version
	I1124 03:40:12.321647  222063 ssh_runner.go:195] Run: crio --version
	I1124 03:40:12.366906  222063 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1124 03:40:12.371731  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:12.372221  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:12.372244  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:12.372483  222063 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1124 03:40:12.379476  222063 kubeadm.go:884] updating cluster {Name:pause-338254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-338254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:40:12.379671  222063 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:40:12.379745  222063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:12.435439  222063 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:40:12.435471  222063 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:40:12.435554  222063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:12.475715  222063 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:40:12.475753  222063 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:40:12.475764  222063 kubeadm.go:935] updating node { 192.168.39.187 8443 v1.34.1 crio true true} ...
	I1124 03:40:12.475911  222063 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-338254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-338254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:40:12.476014  222063 ssh_runner.go:195] Run: crio config
	I1124 03:40:12.540400  222063 cni.go:84] Creating CNI manager for ""
	I1124 03:40:12.540432  222063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:40:12.540457  222063 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:40:12.540488  222063 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-338254 NodeName:pause-338254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:40:12.540669  222063 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-338254"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:40:12.540770  222063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:40:12.561006  222063 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:40:12.561091  222063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:12.578250  222063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1124 03:40:12.612215  222063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:12.643652  222063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1124 03:40:12.670956  222063 ssh_runner.go:195] Run: grep 192.168.39.187	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:12.677964  222063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:12.882890  222063 ssh_runner.go:195] Run: sudo systemctl start kubelet
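	(Editor's sketch, for reference only.) The rendered kubeadm manifest shown above is written to the node as /var/tmp/minikube/kubeadm.yaml.new before kubelet is restarted. A minimal Go sketch of that staging step plus a syntax check is shown below; the file path is temporary, the config body is a trimmed placeholder, and it assumes a kubeadm binary on PATH that supports `kubeadm config validate` (present in recent releases).

	// validate_kubeadm_config.go - illustrative sketch; writes an in-memory kubeadm
	// config to a temp file and asks kubeadm to validate it.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	const kubeadmConfig = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: v1.34.1
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	`

	func main() {
		// Mirror the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" step locally.
		f, err := os.CreateTemp("", "kubeadm-*.yaml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(kubeadmConfig); err != nil {
			panic(err)
		}
		f.Close()
		fmt.Printf("wrote %d bytes to %s\n", len(kubeadmConfig), f.Name())

		// Sanity-check the rendered config (assumes `kubeadm config validate` is available).
		out, err := exec.Command("kubeadm", "config", "validate", "--config", f.Name()).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("validation failed:", err)
		}
	}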
	I1124 03:40:12.904724  222063 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254 for IP: 192.168.39.187
	I1124 03:40:12.904744  222063 certs.go:195] generating shared ca certs ...
	I1124 03:40:12.904769  222063 certs.go:227] acquiring lock for ca certs: {Name:mk173959192d8348177ca5710cbe68cc42fae47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:12.904966  222063 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key
	I1124 03:40:12.905052  222063 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key
	I1124 03:40:12.905067  222063 certs.go:257] generating profile certs ...
	I1124 03:40:12.905221  222063 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/client.key
	I1124 03:40:12.905352  222063 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/apiserver.key.c11b338a
	I1124 03:40:12.905445  222063 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/proxy-client.key
	I1124 03:40:12.905621  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem (1338 bytes)
	W1124 03:40:12.905679  222063 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:12.905693  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 03:40:12.905738  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:12.905780  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:12.905809  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:12.905871  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:40:12.906763  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:12.971486  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 03:40:13.008162  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:13.039860  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:13.073664  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 03:40:13.110448  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:40:13.153930  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:13.283788  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:13.341414  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem --> /usr/share/ca-certificates/189749.pem (1338 bytes)
	I1124 03:40:13.400194  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /usr/share/ca-certificates/1897492.pem (1708 bytes)
	I1124 03:40:13.480265  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:13.599416  222063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:13.692426  222063 ssh_runner.go:195] Run: openssl version
	I1124 03:40:13.714045  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1897492.pem && ln -fs /usr/share/ca-certificates/1897492.pem /etc/ssl/certs/1897492.pem"
	I1124 03:40:13.750423  222063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1897492.pem
	I1124 03:40:13.765536  222063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:47 /usr/share/ca-certificates/1897492.pem
	I1124 03:40:13.765751  222063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1897492.pem
	I1124 03:40:13.788697  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1897492.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:13.824836  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:13.873459  222063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:13.884945  222063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:39 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:13.885029  222063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:13.900126  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:13.930441  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/189749.pem && ln -fs /usr/share/ca-certificates/189749.pem /etc/ssl/certs/189749.pem"
	I1124 03:40:13.959840  222063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/189749.pem
	I1124 03:40:13.978928  222063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:47 /usr/share/ca-certificates/189749.pem
	I1124 03:40:13.979015  222063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/189749.pem
	I1124 03:40:14.008504  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/189749.pem /etc/ssl/certs/51391683.0"
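	(Editor's sketch, for reference only.) Each `openssl x509 -hash` / `ln -fs` pair above installs a CA certificate under /etc/ssl/certs/<subject-hash>.0, the lookup convention OpenSSL uses to locate trusted CAs. A minimal Go sketch of the same step follows; it assumes the openssl CLI is on PATH and uses the certificate path from the log as a placeholder.

	// hash_symlink.go - illustrative sketch of the /etc/ssl/certs/<hash>.0 convention.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCA(pem, certsDir string) error {
		// Equivalent of: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")

		// Equivalent of: ln -fs <pem> <certsDir>/<hash>.0
		os.Remove(link) // ignore error to get -f semantics
		if err := os.Symlink(pem, link); err != nil {
			return err
		}
		fmt.Printf("%s -> %s\n", link, pem)
		return nil
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}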
	I1124 03:40:14.061447  222063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:14.083797  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:40:14.110385  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:40:14.132611  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:40:14.152598  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:40:14.204731  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:40:14.235528  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
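	(Editor's sketch, for reference only.) The `-checkend 86400` invocations above make openssl exit non-zero when a certificate will expire within the next 24 hours, which is how the existing control-plane certificates are judged still usable. The same check expressed directly against the parsed certificate looks roughly like this; the certificate path is taken from the log and is otherwise a placeholder.

	// checkend.go - illustrative equivalent of `openssl x509 -noout -in <crt> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if soon {
			fmt.Println("certificate expires within 24h")
			os.Exit(1) // same convention as -checkend: non-zero means "will expire"
		}
		fmt.Println("certificate still valid for at least 24h")
	}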
	I1124 03:40:14.294461  222063 kubeadm.go:401] StartCluster: {Name:pause-338254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-338254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:14.294641  222063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:14.294735  222063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:14.450649  222063 cri.go:89] found id: "1745a9dfdd711a8ee834728c86110dd2e50c0839b6a2ae5b7741ca646e4fa2cf"
	I1124 03:40:14.450683  222063 cri.go:89] found id: "6aa3ded1ef102324c911cc3a9284ffb02ad584ceaaa53d6767459fedd68b5ab0"
	I1124 03:40:14.450690  222063 cri.go:89] found id: "6dd9863b4e925db23c7f2417e2265709ea171629350bacc2b6f52cc973632214"
	I1124 03:40:14.450695  222063 cri.go:89] found id: "d838ec10bf519b6238f83e68f9bb42b155709dcb3557d8ef647b0a73c31cd0aa"
	I1124 03:40:14.450699  222063 cri.go:89] found id: "7b70cc751747d1a5ed60bd015f3df7de1c179505c3e57ab74febaa54f4092338"
	I1124 03:40:14.450707  222063 cri.go:89] found id: "3b6eb5c0748537dff2962b9590a1dbc87049ca8d280782be74af20026bdc6cac"
	I1124 03:40:14.450712  222063 cri.go:89] found id: "3020fb059fc9d0157f9512beada229b8ace02691eddde40e7fc146e1522e0734"
	I1124 03:40:14.450716  222063 cri.go:89] found id: "962ce9fe41009440988027b6ec6a31651dcca599dae679db54773a25c13da3fa"
	I1124 03:40:14.450720  222063 cri.go:89] found id: ""
	I1124 03:40:14.450780  222063 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-338254 -n pause-338254
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-338254 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-338254 logs -n 25: (1.350217175s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-646844 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ stop    │ -p no-preload-646844 --alsologtostderr -v=3                                                                                                                            │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-780317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                               │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ stop    │ -p embed-certs-780317 --alsologtostderr -v=3                                                                                                                           │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                        │ kubernetes-upgrade-469670    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │                     │
	│ start   │ -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                 │ kubernetes-upgrade-469670    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ delete  │ -p kubernetes-upgrade-469670                                                                                                                                           │ kubernetes-upgrade-469670    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ start   │ -p pause-338254 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                │ pause-338254                 │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p cert-expiration-734487 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                │ cert-expiration-734487       │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p cert-expiration-734487                                                                                                                                              │ cert-expiration-734487       │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p default-k8s-diff-port-871319 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-871319 │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ addons  │ enable dashboard -p no-preload-646844 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                           │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p no-preload-646844 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-780317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                          │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p embed-certs-780317 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ start   │ -p pause-338254 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                         │ pause-338254                 │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-871319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                     │ default-k8s-diff-port-871319 │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ stop    │ -p default-k8s-diff-port-871319 --alsologtostderr -v=3                                                                                                                 │ default-k8s-diff-port-871319 │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │                     │
	│ image   │ no-preload-646844 image list --format=json                                                                                                                             │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ pause   │ -p no-preload-646844 --alsologtostderr -v=1                                                                                                                            │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ unpause │ -p no-preload-646844 --alsologtostderr -v=1                                                                                                                            │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ delete  │ -p no-preload-646844                                                                                                                                                   │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ image   │ embed-certs-780317 image list --format=json                                                                                                                            │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ delete  │ -p no-preload-646844                                                                                                                                                   │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ pause   │ -p embed-certs-780317 --alsologtostderr -v=1                                                                                                                           │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:39:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:39:44.869063  222063 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:39:44.869357  222063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:44.869381  222063 out.go:374] Setting ErrFile to fd 2...
	I1124 03:39:44.869387  222063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:44.869687  222063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:39:44.870842  222063 out.go:368] Setting JSON to false
	I1124 03:39:44.871860  222063 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12125,"bootTime":1763943460,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:39:44.871929  222063 start.go:143] virtualization: kvm guest
	I1124 03:39:44.873254  222063 out.go:179] * [pause-338254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:39:44.874625  222063 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:39:44.874625  222063 notify.go:221] Checking for updates...
	I1124 03:39:44.876575  222063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:39:44.877736  222063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:39:44.880206  222063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 03:39:44.881511  222063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:39:44.882801  222063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:39:44.884281  222063 config.go:182] Loaded profile config "pause-338254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:39:44.884767  222063 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:39:44.923472  222063 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 03:39:44.924576  222063 start.go:309] selected driver: kvm2
	I1124 03:39:44.924594  222063 start.go:927] validating driver "kvm2" against &{Name:pause-338254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-338254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:39:44.924749  222063 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:39:44.925721  222063 cni.go:84] Creating CNI manager for ""
	I1124 03:39:44.925800  222063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:39:44.925869  222063 start.go:353] cluster config:
	{Name:pause-338254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-338254 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:39:44.926009  222063 iso.go:125] acquiring lock: {Name:mk63ee8f30093c8c7d0696dd2486a8eb0d8bd024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:39:44.927266  222063 out.go:179] * Starting "pause-338254" primary control-plane node in "pause-338254" cluster
	I1124 03:39:45.549670  221986 start.go:364] duration metric: took 10.718710451s to acquireMachinesLock for "embed-certs-780317"
	I1124 03:39:45.549725  221986 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:39:45.549733  221986 fix.go:54] fixHost starting: 
	I1124 03:39:45.552388  221986 fix.go:112] recreateIfNeeded on embed-certs-780317: state=Stopped err=<nil>
	W1124 03:39:45.552423  221986 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:39:44.258788  221785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:39:44.263398  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.264024  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:44.264066  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.264401  221785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/config.json ...
	I1124 03:39:44.264670  221785 machine.go:94] provisionDockerMachine start ...
	I1124 03:39:44.267732  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.268156  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:44.268191  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.268438  221785 main.go:143] libmachine: Using SSH client type: native
	I1124 03:39:44.268758  221785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.5 22 <nil> <nil>}
	I1124 03:39:44.268772  221785 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:39:44.398931  221785 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 03:39:44.399046  221785 buildroot.go:166] provisioning hostname "no-preload-646844"
	I1124 03:39:44.402761  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.403265  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:44.403299  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.403545  221785 main.go:143] libmachine: Using SSH client type: native
	I1124 03:39:44.403867  221785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.5 22 <nil> <nil>}
	I1124 03:39:44.403885  221785 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-646844 && echo "no-preload-646844" | sudo tee /etc/hostname
	I1124 03:39:44.557049  221785 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-646844
	
	I1124 03:39:44.560438  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.560957  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:44.560991  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.561232  221785 main.go:143] libmachine: Using SSH client type: native
	I1124 03:39:44.561557  221785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.5 22 <nil> <nil>}
	I1124 03:39:44.561585  221785 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-646844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-646844/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-646844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:39:44.717034  221785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:39:44.717073  221785 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21975-185833/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-185833/.minikube}
	I1124 03:39:44.717128  221785 buildroot.go:174] setting up certificates
	I1124 03:39:44.717155  221785 provision.go:84] configureAuth start
	I1124 03:39:44.720968  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.721493  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:44.721531  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.724581  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.725089  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:44.725125  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.725305  221785 provision.go:143] copyHostCerts
	I1124 03:39:44.725388  221785 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem, removing ...
	I1124 03:39:44.725413  221785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem
	I1124 03:39:44.725490  221785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem (1123 bytes)
	I1124 03:39:44.725591  221785 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem, removing ...
	I1124 03:39:44.725600  221785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem
	I1124 03:39:44.725623  221785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem (1675 bytes)
	I1124 03:39:44.725683  221785 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem, removing ...
	I1124 03:39:44.725694  221785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem
	I1124 03:39:44.726195  221785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem (1078 bytes)
	I1124 03:39:44.726312  221785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem org=jenkins.no-preload-646844 san=[127.0.0.1 192.168.72.5 localhost minikube no-preload-646844]
	I1124 03:39:44.797593  221785 provision.go:177] copyRemoteCerts
	I1124 03:39:44.797680  221785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:39:44.803961  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.804478  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:44.804513  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:44.804707  221785 sshutil.go:53] new ssh client: &{IP:192.168.72.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/no-preload-646844/id_rsa Username:docker}
	I1124 03:39:44.908326  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:39:44.944803  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:39:44.980934  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:39:45.016776  221785 provision.go:87] duration metric: took 299.603368ms to configureAuth
	I1124 03:39:45.016811  221785 buildroot.go:189] setting minikube options for container-runtime
	I1124 03:39:45.017066  221785 config.go:182] Loaded profile config "no-preload-646844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:39:45.020159  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.020580  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:45.020607  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.020856  221785 main.go:143] libmachine: Using SSH client type: native
	I1124 03:39:45.021133  221785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.5 22 <nil> <nil>}
	I1124 03:39:45.021154  221785 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:39:45.272792  221785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:39:45.272825  221785 machine.go:97] duration metric: took 1.008134151s to provisionDockerMachine
	I1124 03:39:45.272842  221785 start.go:293] postStartSetup for "no-preload-646844" (driver="kvm2")
	I1124 03:39:45.272856  221785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:39:45.272952  221785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:39:45.276413  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.276897  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:45.276923  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.277095  221785 sshutil.go:53] new ssh client: &{IP:192.168.72.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/no-preload-646844/id_rsa Username:docker}
	I1124 03:39:45.373517  221785 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:39:45.378394  221785 info.go:137] Remote host: Buildroot 2025.02
	I1124 03:39:45.378425  221785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/addons for local assets ...
	I1124 03:39:45.378518  221785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/files for local assets ...
	I1124 03:39:45.378653  221785 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem -> 1897492.pem in /etc/ssl/certs
	I1124 03:39:45.378783  221785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:39:45.392225  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:39:45.426396  221785 start.go:296] duration metric: took 153.53555ms for postStartSetup
	I1124 03:39:45.426448  221785 fix.go:56] duration metric: took 14.847103842s for fixHost
	I1124 03:39:45.429282  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.429711  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:45.429743  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.429926  221785 main.go:143] libmachine: Using SSH client type: native
	I1124 03:39:45.430244  221785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.5 22 <nil> <nil>}
	I1124 03:39:45.430259  221785 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 03:39:45.549471  221785 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763955585.490835722
	
	I1124 03:39:45.549507  221785 fix.go:216] guest clock: 1763955585.490835722
	I1124 03:39:45.549517  221785 fix.go:229] Guest: 2025-11-24 03:39:45.490835722 +0000 UTC Remote: 2025-11-24 03:39:45.426453659 +0000 UTC m=+18.628031751 (delta=64.382063ms)
	I1124 03:39:45.549542  221785 fix.go:200] guest clock delta is within tolerance: 64.382063ms
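	(Editor's sketch, for reference only.) The clock check above runs `date +%s.%N` in the guest, parses the result, and compares it with the host wall clock; here the ~64 ms delta is within tolerance, so no resync is needed. A minimal sketch of that comparison follows; the guest timestamp is copied from the log, and the one-second tolerance is an assumed value for illustration, not minikube's actual threshold.

	// clockdelta.go - illustrative comparison of a guest `date +%s.%N` reading with the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		guestRaw := "1763955585.490835722" // guest output of `date +%s.%N`, from the log above

		parts := strings.SplitN(guestRaw, ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			panic(err)
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64) // %N always prints nine digits
		if err != nil {
			panic(err)
		}
		guest := time.Unix(sec, nsec)

		delta := time.Since(guest)
		fmt.Printf("guest: %s  delta vs host: %s\n", guest.UTC(), delta)

		const tolerance = time.Second // assumed threshold, for illustration only
		if delta < -tolerance || delta > tolerance {
			fmt.Println("clock skew exceeds tolerance; the guest clock would be resynced")
		}
	}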
	I1124 03:39:45.549549  221785 start.go:83] releasing machines lock for "no-preload-646844", held for 14.970237341s
	I1124 03:39:45.553296  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.553726  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:45.553750  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.554311  221785 ssh_runner.go:195] Run: cat /version.json
	I1124 03:39:45.554412  221785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:39:45.558753  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.558940  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.559367  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:45.559423  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.559504  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:45.559540  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:45.559835  221785 sshutil.go:53] new ssh client: &{IP:192.168.72.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/no-preload-646844/id_rsa Username:docker}
	I1124 03:39:45.560356  221785 sshutil.go:53] new ssh client: &{IP:192.168.72.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/no-preload-646844/id_rsa Username:docker}
	I1124 03:39:45.646338  221785 ssh_runner.go:195] Run: systemctl --version
	I1124 03:39:45.670784  221785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:39:45.823485  221785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:39:45.832246  221785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:39:45.832329  221785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:39:45.856480  221785 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:39:45.856506  221785 start.go:496] detecting cgroup driver to use...
	I1124 03:39:45.856588  221785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:39:45.878425  221785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:39:45.896658  221785 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:39:45.896736  221785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:39:45.916511  221785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:39:45.934883  221785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:39:46.100256  221785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:39:46.342107  221785 docker.go:234] disabling docker service ...
	I1124 03:39:46.342194  221785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:39:46.359265  221785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:39:46.374797  221785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:39:46.551835  221785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:39:46.727178  221785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:39:46.743979  221785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:39:46.773138  221785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:39:46.773228  221785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:39:46.786612  221785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 03:39:46.786687  221785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:39:46.802205  221785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:39:46.814779  221785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:39:46.831862  221785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:39:46.846227  221785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:39:46.861909  221785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:39:46.887161  221785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
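Taken together, the sed/grep edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, drop any stale conmon_cgroup line and re-add it as "pod", and seed default_sysctls with an unprivileged-port override. Assuming the stock CRI-O drop-in layout (the section headers are not shown in the log), the touched portion of the drop-in would end up roughly as:

	$ cat /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]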
	I1124 03:39:46.900616  221785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:39:46.917001  221785 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 03:39:46.917068  221785 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 03:39:46.944619  221785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
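The failed sysctl above is expected on a freshly booted guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is exactly what the next two commands address (load the module, then enable IPv4 forwarding). A quick manual check of the same prerequisites (sketch):

	$ sudo modprobe br_netfilter
	$ sysctl net.bridge.bridge-nf-call-iptables   # resolves now instead of "cannot stat"
	$ cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above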
	I1124 03:39:46.960566  221785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:39:47.123565  221785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:39:47.267113  221785 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:39:47.267214  221785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:39:47.272862  221785 start.go:564] Will wait 60s for crictl version
	I1124 03:39:47.272923  221785 ssh_runner.go:195] Run: which crictl
	I1124 03:39:47.277173  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 03:39:47.314062  221785 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 03:39:47.314175  221785 ssh_runner.go:195] Run: crio --version
	I1124 03:39:47.344295  221785 ssh_runner.go:195] Run: crio --version
	I1124 03:39:47.380416  221785 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1124 03:39:45.253191  221590 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.717207555s
	I1124 03:39:46.793715  221590 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.259612889s
	I1124 03:39:49.036609  221590 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.503950097s
	I1124 03:39:49.056063  221590 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:39:49.071907  221590 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:39:49.088920  221590 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:39:49.089192  221590 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-871319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:39:49.104565  221590 kubeadm.go:319] [bootstrap-token] Using token: hw08sd.u4lfa2j2k5ceahb7
	I1124 03:39:49.105541  221590 out.go:252]   - Configuring RBAC rules ...
	I1124 03:39:49.105694  221590 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:39:49.112125  221590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:39:49.126365  221590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:39:49.131773  221590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:39:49.135003  221590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:39:49.139563  221590 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:39:45.555521  221986 out.go:252] * Restarting existing kvm2 VM for "embed-certs-780317" ...
	I1124 03:39:45.555608  221986 main.go:143] libmachine: starting domain...
	I1124 03:39:45.555630  221986 main.go:143] libmachine: ensuring networks are active...
	I1124 03:39:45.557403  221986 main.go:143] libmachine: Ensuring network default is active
	I1124 03:39:45.558052  221986 main.go:143] libmachine: Ensuring network mk-embed-certs-780317 is active
	I1124 03:39:45.558705  221986 main.go:143] libmachine: getting domain XML...
	I1124 03:39:45.561130  221986 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>embed-certs-780317</name>
	  <uuid>9de4dd1f-0d92-4d2a-888b-436a0e4f8cdb</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21975-185833/.minikube/machines/embed-certs-780317/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21975-185833/.minikube/machines/embed-certs-780317/embed-certs-780317.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:71:a1:5d'/>
	      <source network='mk-embed-certs-780317'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:09:dc:ae'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
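The XML block above is the libvirt domain definition being restarted for embed-certs-780317. When debugging a run like this by hand, the same information is usually reachable with stock virsh commands (illustrative; assumes access to the qemu:///system connection named in the cluster config):

	$ virsh dumpxml embed-certs-780317               # current domain definition
	$ virsh domifaddr embed-certs-780317             # guest addresses, once DHCP completes
	$ virsh net-dhcp-leases mk-embed-certs-780317    # leases on the minikube-created network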
	
	I1124 03:39:47.041829  221986 main.go:143] libmachine: waiting for domain to start...
	I1124 03:39:47.043287  221986 main.go:143] libmachine: domain is now running
	I1124 03:39:47.043309  221986 main.go:143] libmachine: waiting for IP...
	I1124 03:39:47.044134  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:39:47.044915  221986 main.go:143] libmachine: domain embed-certs-780317 has current primary IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:39:47.044932  221986 main.go:143] libmachine: found domain IP: 192.168.61.33
	I1124 03:39:47.044939  221986 main.go:143] libmachine: reserving static IP address...
	I1124 03:39:47.045551  221986 main.go:143] libmachine: found host DHCP lease matching {name: "embed-certs-780317", mac: "52:54:00:71:a1:5d", ip: "192.168.61.33"} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:36:47 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:39:47.045588  221986 main.go:143] libmachine: skip adding static IP to network mk-embed-certs-780317 - found existing host DHCP lease matching {name: "embed-certs-780317", mac: "52:54:00:71:a1:5d", ip: "192.168.61.33"}
	I1124 03:39:47.045603  221986 main.go:143] libmachine: reserved static IP address 192.168.61.33 for domain embed-certs-780317
	I1124 03:39:47.045613  221986 main.go:143] libmachine: waiting for SSH...
	I1124 03:39:47.045623  221986 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 03:39:47.048415  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:39:47.048887  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:36:47 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:39:47.048913  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:39:47.049111  221986 main.go:143] libmachine: Using SSH client type: native
	I1124 03:39:47.049357  221986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1124 03:39:47.049399  221986 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 03:39:44.928208  222063 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:39:44.928250  222063 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:39:44.928264  222063 cache.go:65] Caching tarball of preloaded images
	I1124 03:39:44.928364  222063 preload.go:238] Found /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:39:44.928402  222063 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:39:44.928592  222063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/config.json ...
	I1124 03:39:44.928825  222063 start.go:360] acquireMachinesLock for pause-338254: {Name:mk6edb9cd27540c3b670af896ffc377aa954769e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 03:39:49.446936  221590 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:39:49.954334  221590 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:39:50.443064  221590 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:39:50.444224  221590 kubeadm.go:319] 
	I1124 03:39:50.444361  221590 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:39:50.444396  221590 kubeadm.go:319] 
	I1124 03:39:50.444496  221590 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:39:50.444504  221590 kubeadm.go:319] 
	I1124 03:39:50.444526  221590 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:39:50.444601  221590 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:39:50.444681  221590 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:39:50.444691  221590 kubeadm.go:319] 
	I1124 03:39:50.444772  221590 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:39:50.444782  221590 kubeadm.go:319] 
	I1124 03:39:50.444838  221590 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:39:50.444844  221590 kubeadm.go:319] 
	I1124 03:39:50.444884  221590 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:39:50.444978  221590 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:39:50.445067  221590 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:39:50.445076  221590 kubeadm.go:319] 
	I1124 03:39:50.445167  221590 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:39:50.445304  221590 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:39:50.445321  221590 kubeadm.go:319] 
	I1124 03:39:50.445457  221590 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token hw08sd.u4lfa2j2k5ceahb7 \
	I1124 03:39:50.445618  221590 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:40f9f8d245e87dfcd676995f2f148799897721892812b70c22eda7d58a9ddc01 \
	I1124 03:39:50.445655  221590 kubeadm.go:319] 	--control-plane 
	I1124 03:39:50.445660  221590 kubeadm.go:319] 
	I1124 03:39:50.445796  221590 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:39:50.445809  221590 kubeadm.go:319] 
	I1124 03:39:50.445924  221590 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token hw08sd.u4lfa2j2k5ceahb7 \
	I1124 03:39:50.446069  221590 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:40f9f8d245e87dfcd676995f2f148799897721892812b70c22eda7d58a9ddc01 
	I1124 03:39:50.447628  221590 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:39:50.447662  221590 cni.go:84] Creating CNI manager for ""
	I1124 03:39:50.447681  221590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:39:50.449315  221590 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 03:39:47.384458  221785 main.go:143] libmachine: domain no-preload-646844 has defined MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:47.384849  221785 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:75:0d", ip: ""} in network mk-no-preload-646844: {Iface:virbr4 ExpiryTime:2025-11-24 04:39:42 +0000 UTC Type:0 Mac:52:54:00:15:75:0d Iaid: IPaddr:192.168.72.5 Prefix:24 Hostname:no-preload-646844 Clientid:01:52:54:00:15:75:0d}
	I1124 03:39:47.384888  221785 main.go:143] libmachine: domain no-preload-646844 has defined IP address 192.168.72.5 and MAC address 52:54:00:15:75:0d in network mk-no-preload-646844
	I1124 03:39:47.385081  221785 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1124 03:39:47.389380  221785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:39:47.403833  221785 kubeadm.go:884] updating cluster {Name:no-preload-646844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-646844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.5 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:39:47.403975  221785 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:39:47.404034  221785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:39:47.436435  221785 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 03:39:47.436469  221785 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 03:39:47.436565  221785 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:39:47.436593  221785 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 03:39:47.436601  221785 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 03:39:47.436577  221785 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 03:39:47.436640  221785 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 03:39:47.436577  221785 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 03:39:47.436787  221785 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 03:39:47.436602  221785 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 03:39:47.438484  221785 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 03:39:47.438503  221785 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 03:39:47.438507  221785 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 03:39:47.438507  221785 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 03:39:47.438487  221785 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 03:39:47.438487  221785 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 03:39:47.438486  221785 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:39:47.438607  221785 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 03:39:47.577768  221785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1124 03:39:47.592676  221785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1124 03:39:47.599721  221785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 03:39:47.599899  221785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1124 03:39:47.602990  221785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1124 03:39:47.609642  221785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1124 03:39:47.609917  221785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 03:39:47.693883  221785 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1124 03:39:47.693943  221785 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 03:39:47.694004  221785 ssh_runner.go:195] Run: which crictl
	I1124 03:39:47.730010  221785 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1124 03:39:47.730082  221785 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1124 03:39:47.730145  221785 ssh_runner.go:195] Run: which crictl
	I1124 03:39:47.781075  221785 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1124 03:39:47.781098  221785 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1124 03:39:47.781133  221785 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 03:39:47.781145  221785 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 03:39:47.781193  221785 ssh_runner.go:195] Run: which crictl
	I1124 03:39:47.781196  221785 ssh_runner.go:195] Run: which crictl
	I1124 03:39:47.781262  221785 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1124 03:39:47.781312  221785 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 03:39:47.781389  221785 ssh_runner.go:195] Run: which crictl
	I1124 03:39:47.783779  221785 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1124 03:39:47.783814  221785 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 03:39:47.783853  221785 ssh_runner.go:195] Run: which crictl
	I1124 03:39:47.904881  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 03:39:47.904902  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 03:39:47.904967  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 03:39:47.904988  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 03:39:47.905073  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 03:39:47.905198  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 03:39:48.029547  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 03:39:48.029553  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 03:39:48.029553  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 03:39:48.040060  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 03:39:48.040095  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 03:39:48.040147  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 03:39:48.137974  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 03:39:48.138079  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 03:39:48.150691  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 03:39:48.172675  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 03:39:48.172868  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 03:39:48.173033  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 03:39:48.269455  221785 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 03:39:48.269505  221785 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 03:39:48.269595  221785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 03:39:48.269601  221785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1124 03:39:48.269732  221785 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 03:39:48.269833  221785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:48.287265  221785 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 03:39:48.287435  221785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:39:48.300182  221785 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 03:39:48.300243  221785 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.12.1 (exists)
	I1124 03:39:48.300278  221785 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1124 03:39:48.300295  221785 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.6.4-0 (exists)
	I1124 03:39:48.300304  221785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 03:39:48.300307  221785 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.34.1 (exists)
	I1124 03:39:48.300325  221785 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1124 03:39:48.300182  221785 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 03:39:48.300442  221785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 03:39:48.303535  221785 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.34.1 (exists)
	I1124 03:39:48.308578  221785 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.34.1 (exists)
	I1124 03:39:48.829503  221785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:39:50.528540  221785 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.228185984s)
	I1124 03:39:50.528574  221785 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1124 03:39:50.528575  221785 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.228112519s)
	I1124 03:39:50.528598  221785 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.34.1 (exists)
	I1124 03:39:50.528601  221785 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:50.528626  221785 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.699097393s)
	I1124 03:39:50.528656  221785 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 03:39:50.528691  221785 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:39:50.528660  221785 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:50.528729  221785 ssh_runner.go:195] Run: which crictl
	I1124 03:39:50.450447  221590 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 03:39:50.465141  221590 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
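Here minikube writes its 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The log does not show the file itself; as a rough sketch only (not the verbatim contents), a conflist of the general shape the bridge plugin expects looks like:

	$ cat /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}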
	I1124 03:39:50.494578  221590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:39:50.494636  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:50.494649  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-871319 minikube.k8s.io/updated_at=2025_11_24T03_39_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=default-k8s-diff-port-871319 minikube.k8s.io/primary=true
	I1124 03:39:50.554460  221590 ops.go:34] apiserver oom_adj: -16
	I1124 03:39:50.730930  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:51.231272  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:51.731808  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:52.231900  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:52.731946  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:53.231952  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:53.732059  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:54.231114  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:50.151665  221986 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	I1124 03:39:54.731159  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:55.231040  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:55.732059  221590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:39:55.912213  221590 kubeadm.go:1114] duration metric: took 5.417637366s to wait for elevateKubeSystemPrivileges
	I1124 03:39:55.912269  221590 kubeadm.go:403] duration metric: took 18.768530566s to StartCluster
	I1124 03:39:55.912297  221590 settings.go:142] acquiring lock: {Name:mk66e7c24245b8d0d5ec4dc3d788350fb3f2b31a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:39:55.912412  221590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:39:55.913819  221590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/kubeconfig: {Name:mkcda9156e9d84203343cbeb8993f30147e2224f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:39:55.914120  221590 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.42 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:39:55.914303  221590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:39:55.914625  221590 config.go:182] Loaded profile config "default-k8s-diff-port-871319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:39:55.914639  221590 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:39:55.914767  221590 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-871319"
	I1124 03:39:55.914788  221590 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-871319"
	I1124 03:39:55.914832  221590 host.go:66] Checking if "default-k8s-diff-port-871319" exists ...
	I1124 03:39:55.914853  221590 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-871319"
	I1124 03:39:55.914880  221590 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-871319"
	I1124 03:39:55.915624  221590 out.go:179] * Verifying Kubernetes components...
	I1124 03:39:55.916904  221590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:39:55.918993  221590 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-871319"
	I1124 03:39:55.919028  221590 host.go:66] Checking if "default-k8s-diff-port-871319" exists ...
	I1124 03:39:55.919159  221590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:39:55.435702  221785 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.906975868s)
	I1124 03:39:55.435749  221785 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:39:55.435780  221785 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 03:39:55.435720  221785 ssh_runner.go:235] Completed: which crictl: (4.906969354s)
	I1124 03:39:55.435835  221785 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 03:39:55.435864  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:39:55.920191  221590 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:39:55.920220  221590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:39:55.921153  221590 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:39:55.921173  221590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:39:55.924978  221590 main.go:143] libmachine: domain default-k8s-diff-port-871319 has defined MAC address 52:54:00:78:8e:1c in network mk-default-k8s-diff-port-871319
	I1124 03:39:55.925598  221590 main.go:143] libmachine: domain default-k8s-diff-port-871319 has defined MAC address 52:54:00:78:8e:1c in network mk-default-k8s-diff-port-871319
	I1124 03:39:55.925602  221590 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:8e:1c", ip: ""} in network mk-default-k8s-diff-port-871319: {Iface:virbr5 ExpiryTime:2025-11-24 04:39:28 +0000 UTC Type:0 Mac:52:54:00:78:8e:1c Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:default-k8s-diff-port-871319 Clientid:01:52:54:00:78:8e:1c}
	I1124 03:39:55.925647  221590 main.go:143] libmachine: domain default-k8s-diff-port-871319 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:8e:1c in network mk-default-k8s-diff-port-871319
	I1124 03:39:55.926037  221590 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/default-k8s-diff-port-871319/id_rsa Username:docker}
	I1124 03:39:55.926168  221590 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:8e:1c", ip: ""} in network mk-default-k8s-diff-port-871319: {Iface:virbr5 ExpiryTime:2025-11-24 04:39:28 +0000 UTC Type:0 Mac:52:54:00:78:8e:1c Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:default-k8s-diff-port-871319 Clientid:01:52:54:00:78:8e:1c}
	I1124 03:39:55.926207  221590 main.go:143] libmachine: domain default-k8s-diff-port-871319 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:8e:1c in network mk-default-k8s-diff-port-871319
	I1124 03:39:55.926510  221590 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/default-k8s-diff-port-871319/id_rsa Username:docker}
	I1124 03:39:56.244349  221590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
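The long pipeline above rewrites the coredns ConfigMap so that in-cluster lookups of host.minikube.internal resolve to the host bridge IP. Reading it off the logged sed expressions, the Corefile gains a log directive before errors plus this hosts stanza ahead of the forward block:

	hosts {
	   192.168.83.1 host.minikube.internal
	   fallthrough
	}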
	I1124 03:39:56.403530  221590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:39:56.674969  221590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:39:56.766042  221590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:39:57.433520  221590 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.029933521s)
	I1124 03:39:57.433715  221590 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.189299981s)
	I1124 03:39:57.433744  221590 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1124 03:39:57.434864  221590 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-871319" to be "Ready" ...
	I1124 03:39:57.460270  221590 node_ready.go:49] node "default-k8s-diff-port-871319" is "Ready"
	I1124 03:39:57.460312  221590 node_ready.go:38] duration metric: took 25.417398ms for node "default-k8s-diff-port-871319" to be "Ready" ...
	I1124 03:39:57.460333  221590 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:39:57.460416  221590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:39:58.029274  221590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-871319" context rescaled to 1 replicas
	I1124 03:39:58.190367  221590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.515348572s)
	I1124 03:39:58.190495  221590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.424416599s)
	I1124 03:39:58.190844  221590 api_server.go:72] duration metric: took 2.276679586s to wait for apiserver process to appear ...
	I1124 03:39:58.190866  221590 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:39:58.190890  221590 api_server.go:253] Checking apiserver healthz at https://192.168.83.42:8444/healthz ...
	I1124 03:39:58.219934  221590 api_server.go:279] https://192.168.83.42:8444/healthz returned 200:
	ok
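The healthz poll above goes to the apiserver on the non-default port 8444 that gives this profile its name. Reproducing the same probe by hand would look like (sketch; -k skips verifying the cluster CA for brevity):

	$ curl -k https://192.168.83.42:8444/healthz
	ok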
	I1124 03:39:58.225362  221590 api_server.go:141] control plane version: v1.34.1
	I1124 03:39:58.225402  221590 api_server.go:131] duration metric: took 34.5299ms to wait for apiserver health ...
	I1124 03:39:58.225412  221590 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:39:58.235500  221590 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:39:58.236729  221590 addons.go:530] duration metric: took 2.322089891s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:39:58.244895  221590 system_pods.go:59] 8 kube-system pods found
	I1124 03:39:58.244952  221590 system_pods.go:61] "coredns-66bc5c9577-2dszq" [0c460675-43ae-49eb-a59c-cb91c29f7278] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:39:58.244967  221590 system_pods.go:61] "coredns-66bc5c9577-wsbv8" [811b517e-4378-4e18-a73c-1190070e9925] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:39:58.244983  221590 system_pods.go:61] "etcd-default-k8s-diff-port-871319" [10706f4e-fbc9-4c9c-b80a-485dc922c2ba] Running
	I1124 03:39:58.245001  221590 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871319" [b734329d-1ca8-42ae-b348-9bccbe4073e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:39:58.245014  221590 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871319" [afa72b2a-67a8-47e5-a40c-22dd861ba63b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:39:58.245021  221590 system_pods.go:61] "kube-proxy-mb98n" [bcaa90f1-a8fa-4dcb-8bbb-f1c692f1eb3f] Running
	I1124 03:39:58.245037  221590 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871319" [9527e8c9-bba7-4492-b94f-8b16897e5a42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:39:58.245049  221590 system_pods.go:61] "storage-provisioner" [3d7dd5c9-b3c5-46f6-a9d4-bf52b95c4a8a] Pending
	I1124 03:39:58.245058  221590 system_pods.go:74] duration metric: took 19.638858ms to wait for pod list to return data ...
	I1124 03:39:58.245069  221590 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:39:58.252866  221590 default_sa.go:45] found service account: "default"
	I1124 03:39:58.252893  221590 default_sa.go:55] duration metric: took 7.816886ms for default service account to be created ...
	I1124 03:39:58.252905  221590 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:39:58.256565  221590 system_pods.go:86] 8 kube-system pods found
	I1124 03:39:58.256599  221590 system_pods.go:89] "coredns-66bc5c9577-2dszq" [0c460675-43ae-49eb-a59c-cb91c29f7278] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:39:58.256608  221590 system_pods.go:89] "coredns-66bc5c9577-wsbv8" [811b517e-4378-4e18-a73c-1190070e9925] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:39:58.256615  221590 system_pods.go:89] "etcd-default-k8s-diff-port-871319" [10706f4e-fbc9-4c9c-b80a-485dc922c2ba] Running
	I1124 03:39:58.256624  221590 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-871319" [b734329d-1ca8-42ae-b348-9bccbe4073e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:39:58.256631  221590 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-871319" [afa72b2a-67a8-47e5-a40c-22dd861ba63b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:39:58.256635  221590 system_pods.go:89] "kube-proxy-mb98n" [bcaa90f1-a8fa-4dcb-8bbb-f1c692f1eb3f] Running
	I1124 03:39:58.256641  221590 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-871319" [9527e8c9-bba7-4492-b94f-8b16897e5a42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:39:58.256645  221590 system_pods.go:89] "storage-provisioner" [3d7dd5c9-b3c5-46f6-a9d4-bf52b95c4a8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:39:58.256653  221590 system_pods.go:126] duration metric: took 3.74211ms to wait for k8s-apps to be running ...
	I1124 03:39:58.256660  221590 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:39:58.256706  221590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:39:58.278279  221590 system_svc.go:56] duration metric: took 21.604243ms WaitForService to wait for kubelet
	I1124 03:39:58.278339  221590 kubeadm.go:587] duration metric: took 2.36417903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:39:58.278395  221590 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:39:58.282187  221590 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 03:39:58.282216  221590 node_conditions.go:123] node cpu capacity is 2
	I1124 03:39:58.282238  221590 node_conditions.go:105] duration metric: took 3.835768ms to run NodePressure ...
	I1124 03:39:58.282250  221590 start.go:242] waiting for startup goroutines ...
	I1124 03:39:58.282260  221590 start.go:247] waiting for cluster config update ...
	I1124 03:39:58.282275  221590 start.go:256] writing updated cluster config ...
	I1124 03:39:58.282620  221590 ssh_runner.go:195] Run: rm -f paused
	I1124 03:39:58.289088  221590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:39:58.294227  221590 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2dszq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:39:56.231687  221986 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	I1124 03:39:59.233868  221986 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: connection refused
	I1124 03:39:58.165021  221785 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.729136876s)
	I1124 03:39:58.165052  221785 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.729162324s)
	I1124 03:39:58.165063  221785 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 03:39:58.165108  221785 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:39:58.165158  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:39:58.165175  221785 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:40:00.262842  221785 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.097638773s)
	I1124 03:40:00.262875  221785 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.097691104s)
	I1124 03:40:00.262885  221785 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 03:40:00.262962  221785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:40:00.263021  221785 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 03:40:00.263073  221785 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 03:40:01.730986  221785 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.467981899s)
	I1124 03:40:01.731051  221785 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 03:40:01.730987  221785 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.467869199s)
	I1124 03:40:01.731170  221785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:40:01.731174  221785 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 03:40:01.731270  221785 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 03:40:01.731302  221785 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 03:40:01.738414  221785 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1124 03:40:03.512337  222063 start.go:364] duration metric: took 18.583431842s to acquireMachinesLock for "pause-338254"
	I1124 03:40:03.512425  222063 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:40:03.512438  222063 fix.go:54] fixHost starting: 
	I1124 03:40:03.514902  222063 fix.go:112] recreateIfNeeded on pause-338254: state=Running err=<nil>
	W1124 03:40:03.514944  222063 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:39:59.302797  221590 pod_ready.go:94] pod "coredns-66bc5c9577-2dszq" is "Ready"
	I1124 03:39:59.302837  221590 pod_ready.go:86] duration metric: took 1.008579698s for pod "coredns-66bc5c9577-2dszq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:39:59.302852  221590 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wsbv8" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:40:01.309841  221590 pod_ready.go:104] pod "coredns-66bc5c9577-wsbv8" is not "Ready", error: <nil>
	W1124 03:40:03.809688  221590 pod_ready.go:104] pod "coredns-66bc5c9577-wsbv8" is not "Ready", error: <nil>
	I1124 03:40:02.340260  221986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:40:02.344102  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.344556  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:02.344581  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.344864  221986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/embed-certs-780317/config.json ...
	I1124 03:40:02.345096  221986 machine.go:94] provisionDockerMachine start ...
	I1124 03:40:02.347407  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.347868  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:02.347893  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.348049  221986 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:02.348276  221986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1124 03:40:02.348289  221986 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:40:02.452479  221986 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 03:40:02.452529  221986 buildroot.go:166] provisioning hostname "embed-certs-780317"
	I1124 03:40:02.455739  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.456078  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:02.456118  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.456318  221986 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:02.456546  221986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1124 03:40:02.456558  221986 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-780317 && echo "embed-certs-780317" | sudo tee /etc/hostname
	I1124 03:40:02.584532  221986 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-780317
	
	I1124 03:40:02.588059  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.588436  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:02.588480  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.588707  221986 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:02.588902  221986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1124 03:40:02.588915  221986 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-780317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-780317/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-780317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:40:02.706831  221986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
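The provisioning step above keeps 127.0.1.1 mapped to the machine's hostname without adding duplicate entries: if some line already ends in the hostname it does nothing, otherwise it rewrites an existing 127.0.1.1 line or appends a new one. A minimal Go sketch of the same idempotent update, for illustration only (ensureHostsEntry is a hypothetical helper, not minikube code):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the provisioning heredoc: if no line already maps
	// the hostname, either rewrite an existing 127.0.1.1 line or append one.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		content := string(data)
		// Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(content) {
			return nil // hostname already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(content) {
			// Equivalent of: sed -i 's/^127.0.1.1\s.*/127.0.1.1 <hostname>/g'
			content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
		} else {
			// Equivalent of: echo '127.0.1.1 <hostname>' | tee -a /etc/hosts
			if !strings.HasSuffix(content, "\n") {
				content += "\n"
			}
			content += "127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(content), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "embed-certs-780317"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}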
	I1124 03:40:02.706865  221986 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21975-185833/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-185833/.minikube}
	I1124 03:40:02.706909  221986 buildroot.go:174] setting up certificates
	I1124 03:40:02.706935  221986 provision.go:84] configureAuth start
	I1124 03:40:02.710560  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.711077  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:02.711125  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.714381  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.714828  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:02.714854  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.715046  221986 provision.go:143] copyHostCerts
	I1124 03:40:02.715115  221986 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem, removing ...
	I1124 03:40:02.715140  221986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem
	I1124 03:40:02.715234  221986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem (1078 bytes)
	I1124 03:40:02.715329  221986 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem, removing ...
	I1124 03:40:02.715338  221986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem
	I1124 03:40:02.715367  221986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem (1123 bytes)
	I1124 03:40:02.715449  221986 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem, removing ...
	I1124 03:40:02.715458  221986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem
	I1124 03:40:02.715487  221986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem (1675 bytes)
	I1124 03:40:02.715591  221986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem org=jenkins.embed-certs-780317 san=[127.0.0.1 192.168.61.33 embed-certs-780317 localhost minikube]
	I1124 03:40:02.835176  221986 provision.go:177] copyRemoteCerts
	I1124 03:40:02.835251  221986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:40:02.838303  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.838772  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:02.838798  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:02.838961  221986 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/embed-certs-780317/id_rsa Username:docker}
	I1124 03:40:02.920762  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:40:02.951140  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:40:02.980449  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:40:03.011179  221986 provision.go:87] duration metric: took 304.222606ms to configureAuth
	I1124 03:40:03.011215  221986 buildroot.go:189] setting minikube options for container-runtime
	I1124 03:40:03.011483  221986 config.go:182] Loaded profile config "embed-certs-780317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:40:03.014571  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.015028  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:03.015081  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.015318  221986 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.015611  221986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1124 03:40:03.015641  221986 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:40:03.258230  221986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:40:03.258285  221986 machine.go:97] duration metric: took 913.162989ms to provisionDockerMachine
	I1124 03:40:03.258304  221986 start.go:293] postStartSetup for "embed-certs-780317" (driver="kvm2")
	I1124 03:40:03.258318  221986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:40:03.258441  221986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:40:03.261671  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.262171  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:03.262205  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.262419  221986 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/embed-certs-780317/id_rsa Username:docker}
	I1124 03:40:03.344994  221986 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:40:03.349786  221986 info.go:137] Remote host: Buildroot 2025.02
	I1124 03:40:03.349822  221986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/addons for local assets ...
	I1124 03:40:03.349895  221986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/files for local assets ...
	I1124 03:40:03.349989  221986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem -> 1897492.pem in /etc/ssl/certs
	I1124 03:40:03.350119  221986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:40:03.362811  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:40:03.396283  221986 start.go:296] duration metric: took 137.947581ms for postStartSetup
	I1124 03:40:03.396344  221986 fix.go:56] duration metric: took 17.846608954s for fixHost
	I1124 03:40:03.399613  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.400038  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:03.400077  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.400262  221986 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.400566  221986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1124 03:40:03.400583  221986 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 03:40:03.512085  221986 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763955603.481690666
	
	I1124 03:40:03.512117  221986 fix.go:216] guest clock: 1763955603.481690666
	I1124 03:40:03.512128  221986 fix.go:229] Guest: 2025-11-24 03:40:03.481690666 +0000 UTC Remote: 2025-11-24 03:40:03.396351283 +0000 UTC m=+28.716156441 (delta=85.339383ms)
	I1124 03:40:03.512154  221986 fix.go:200] guest clock delta is within tolerance: 85.339383ms
	I1124 03:40:03.512163  221986 start.go:83] releasing machines lock for "embed-certs-780317", held for 17.962459289s
	I1124 03:40:03.516016  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.516510  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:03.516547  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.517216  221986 ssh_runner.go:195] Run: cat /version.json
	I1124 03:40:03.517305  221986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:40:03.521357  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.521558  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.521829  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:03.521862  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.522061  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:03.522093  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:03.522084  221986 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/embed-certs-780317/id_rsa Username:docker}
	I1124 03:40:03.522429  221986 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/embed-certs-780317/id_rsa Username:docker}
	I1124 03:40:03.638473  221986 ssh_runner.go:195] Run: systemctl --version
	I1124 03:40:03.647140  221986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:40:03.804285  221986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:40:03.813067  221986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:40:03.813142  221986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:40:03.834095  221986 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:40:03.834125  221986 start.go:496] detecting cgroup driver to use...
	I1124 03:40:03.834202  221986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:40:03.860826  221986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:40:03.877876  221986 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:40:03.877950  221986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:40:03.899191  221986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:40:03.916306  221986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:40:04.105528  221986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:40:04.362555  221986 docker.go:234] disabling docker service ...
	I1124 03:40:04.362642  221986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:40:04.387651  221986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:40:04.406137  221986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:40:04.590580  221986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:40:03.516608  222063 out.go:252] * Updating the running kvm2 "pause-338254" VM ...
	I1124 03:40:03.516645  222063 machine.go:94] provisionDockerMachine start ...
	I1124 03:40:03.521237  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.521849  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.521881  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.522086  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.522445  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:03.522458  222063 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:40:03.634652  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-338254
	
	I1124 03:40:03.634700  222063 buildroot.go:166] provisioning hostname "pause-338254"
	I1124 03:40:03.638433  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.638937  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.638968  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.639164  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.639482  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:03.639503  222063 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-338254 && echo "pause-338254" | sudo tee /etc/hostname
	I1124 03:40:03.776003  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-338254
	
	I1124 03:40:03.779278  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.779751  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.779779  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.779976  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.780256  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:03.780275  222063 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-338254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-338254/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-338254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:40:03.889655  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:40:03.889689  222063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21975-185833/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-185833/.minikube}
	I1124 03:40:03.889739  222063 buildroot.go:174] setting up certificates
	I1124 03:40:03.889754  222063 provision.go:84] configureAuth start
	I1124 03:40:03.894105  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.894686  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.894727  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.898794  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.899599  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.899649  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.899841  222063 provision.go:143] copyHostCerts
	I1124 03:40:03.899924  222063 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem, removing ...
	I1124 03:40:03.899945  222063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem
	I1124 03:40:03.900023  222063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/ca.pem (1078 bytes)
	I1124 03:40:03.900191  222063 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem, removing ...
	I1124 03:40:03.900208  222063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem
	I1124 03:40:03.900264  222063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/cert.pem (1123 bytes)
	I1124 03:40:03.900350  222063 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem, removing ...
	I1124 03:40:03.900360  222063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem
	I1124 03:40:03.900421  222063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-185833/.minikube/key.pem (1675 bytes)
	I1124 03:40:03.900503  222063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem org=jenkins.pause-338254 san=[127.0.0.1 192.168.39.187 localhost minikube pause-338254]
	I1124 03:40:03.983993  222063 provision.go:177] copyRemoteCerts
	I1124 03:40:03.984088  222063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:40:03.987664  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.988301  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:03.988341  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:03.988549  222063 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/pause-338254/id_rsa Username:docker}
	I1124 03:40:04.079313  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:40:04.115013  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 03:40:04.152539  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:40:04.193229  222063 provision.go:87] duration metric: took 303.45674ms to configureAuth
	I1124 03:40:04.193269  222063 buildroot.go:189] setting minikube options for container-runtime
	I1124 03:40:04.193570  222063 config.go:182] Loaded profile config "pause-338254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:40:04.197162  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:04.197668  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:04.197704  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:04.197955  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:04.198285  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:04.198314  222063 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:40:04.750931  221986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:40:04.768461  221986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:40:04.792865  221986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:40:04.792944  221986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:04.806725  221986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 03:40:04.806798  221986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:04.822441  221986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:04.838813  221986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:04.854628  221986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:40:04.869765  221986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:04.883307  221986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:04.905552  221986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:04.919258  221986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:40:04.930188  221986 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 03:40:04.930274  221986 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 03:40:04.953435  221986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:40:04.965323  221986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:05.125300  221986 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:40:05.265902  221986 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:40:05.265982  221986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:40:05.272833  221986 start.go:564] Will wait 60s for crictl version
	I1124 03:40:05.272899  221986 ssh_runner.go:195] Run: which crictl
	I1124 03:40:05.277904  221986 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 03:40:05.316367  221986 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 03:40:05.316485  221986 ssh_runner.go:195] Run: crio --version
	I1124 03:40:05.346492  221986 ssh_runner.go:195] Run: crio --version
	I1124 03:40:05.385471  221986 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1124 03:40:03.089496  221785 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.358161967s)
	I1124 03:40:03.089539  221785 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1124 03:40:03.089577  221785 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:40:03.089640  221785 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:40:03.835512  221785 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-185833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:40:03.835565  221785 cache_images.go:125] Successfully loaded all cached images
	I1124 03:40:03.835574  221785 cache_images.go:94] duration metric: took 16.399091401s to LoadCachedImages
	I1124 03:40:03.835590  221785 kubeadm.go:935] updating node { 192.168.72.5 8443 v1.34.1 crio true true} ...
	I1124 03:40:03.835734  221785 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-646844 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-646844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:40:03.835822  221785 ssh_runner.go:195] Run: crio config
	I1124 03:40:03.891031  221785 cni.go:84] Creating CNI manager for ""
	I1124 03:40:03.891066  221785 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:40:03.891090  221785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:40:03.891123  221785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.5 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-646844 NodeName:no-preload-646844 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:40:03.891306  221785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-646844"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
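The kubeadm.yaml above is generated from the kubeadm options logged at kubeadm.go:190. As a rough illustration of how such a stanza can be rendered from a small options struct with text/template (the struct, field names, and template here are assumptions made for the sketch, not minikube's actual generator):

	package main

	import (
		"os"
		"text/template"
	)

	// nodeConfig carries the handful of values this sketch substitutes into the
	// template; a real generator carries many more fields.
	type nodeConfig struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initConfig = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		cfg := nodeConfig{
			AdvertiseAddress: "192.168.72.5",
			BindPort:         8443,
			NodeName:         "no-preload-646844",
			CRISocket:        "/var/run/crio/crio.sock",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}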
	I1124 03:40:03.891406  221785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:40:03.906340  221785 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:40:03.906435  221785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:03.921159  221785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1124 03:40:03.946060  221785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:03.967938  221785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 03:40:03.990931  221785 ssh_runner.go:195] Run: grep 192.168.72.5	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:03.995445  221785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:04.010233  221785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:04.176676  221785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:04.202831  221785 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844 for IP: 192.168.72.5
	I1124 03:40:04.202853  221785 certs.go:195] generating shared ca certs ...
	I1124 03:40:04.202876  221785 certs.go:227] acquiring lock for ca certs: {Name:mk173959192d8348177ca5710cbe68cc42fae47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:04.203068  221785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key
	I1124 03:40:04.203153  221785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key
	I1124 03:40:04.203169  221785 certs.go:257] generating profile certs ...
	I1124 03:40:04.203290  221785 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.key
	I1124 03:40:04.203356  221785 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/apiserver.key.4a096ee2
	I1124 03:40:04.203415  221785 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/proxy-client.key
	I1124 03:40:04.203532  221785 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem (1338 bytes)
	W1124 03:40:04.203562  221785 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:04.203571  221785 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 03:40:04.203601  221785 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:04.203627  221785 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:04.203651  221785 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:04.203690  221785 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:40:04.204368  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:04.253927  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 03:40:04.304995  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:04.351899  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:04.400681  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:40:04.446883  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:04.484384  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:04.517698  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:04.549520  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /usr/share/ca-certificates/1897492.pem (1708 bytes)
	I1124 03:40:04.585561  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:04.620950  221785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem --> /usr/share/ca-certificates/189749.pem (1338 bytes)
	I1124 03:40:04.650945  221785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:04.672666  221785 ssh_runner.go:195] Run: openssl version
	I1124 03:40:04.679607  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1897492.pem && ln -fs /usr/share/ca-certificates/1897492.pem /etc/ssl/certs/1897492.pem"
	I1124 03:40:04.693413  221785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1897492.pem
	I1124 03:40:04.699014  221785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:47 /usr/share/ca-certificates/1897492.pem
	I1124 03:40:04.699104  221785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1897492.pem
	I1124 03:40:04.707312  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1897492.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:04.721165  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:04.735784  221785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:04.741349  221785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:39 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:04.741429  221785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:04.749025  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:04.763681  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/189749.pem && ln -fs /usr/share/ca-certificates/189749.pem /etc/ssl/certs/189749.pem"
	I1124 03:40:04.781173  221785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/189749.pem
	I1124 03:40:04.786966  221785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:47 /usr/share/ca-certificates/189749.pem
	I1124 03:40:04.787072  221785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/189749.pem
	I1124 03:40:04.794866  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/189749.pem /etc/ssl/certs/51391683.0"
	I1124 03:40:04.809810  221785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:04.815759  221785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:40:04.824441  221785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:40:04.834215  221785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:40:04.842747  221785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:40:04.851261  221785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:40:04.860185  221785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
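Each `openssl x509 -checkend 86400` run above asks only whether the certificate remains valid for the next 24 hours. A small Go equivalent using crypto/x509, shown to illustrate what these checks verify (checkend is a hypothetical helper, not part of minikube):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the PEM certificate at path is still valid for at
	// least the given duration, mirroring `openssl x509 -checkend <seconds>`.
	func checkend(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("valid for at least 24h:", ok)
	}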
	I1124 03:40:04.868720  221785 kubeadm.go:401] StartCluster: {Name:no-preload-646844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-646844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.5 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:04.868817  221785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:04.868889  221785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:04.904939  221785 cri.go:89] found id: ""
	I1124 03:40:04.905037  221785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:04.917800  221785 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:40:04.917827  221785 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:40:04.917893  221785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:40:04.930740  221785 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:40:04.931592  221785 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-646844" does not appear in /home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:40:04.931910  221785 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-185833/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-646844" cluster setting kubeconfig missing "no-preload-646844" context setting]
	I1124 03:40:04.932521  221785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/kubeconfig: {Name:mkcda9156e9d84203343cbeb8993f30147e2224f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:04.933947  221785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:40:04.945075  221785 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.72.5
	I1124 03:40:04.945120  221785 kubeadm.go:1161] stopping kube-system containers ...
	I1124 03:40:04.945138  221785 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 03:40:04.945224  221785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:04.980510  221785 cri.go:89] found id: ""
	I1124 03:40:04.980605  221785 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 03:40:05.004213  221785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:05.021036  221785 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:05.021065  221785 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:05.021138  221785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:05.032148  221785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:05.032223  221785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:05.043713  221785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:05.054968  221785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:05.055047  221785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:05.066956  221785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:05.078224  221785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:05.078287  221785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:05.093072  221785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:05.106790  221785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:05.106866  221785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
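
The grep/rm pairs above are the stale-config cleanup: each file under /etc/kubernetes that is missing or does not reference https://control-plane.minikube.internal:8443 is removed so the kubeadm phases that follow can regenerate it. A rough standalone equivalent of that loop, as a sketch (paths and endpoint come from the log; this is not the project's implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// removeStaleKubeconfigs mirrors the grep/rm sequence above: any kubeconfig
// that is missing or does not mention the expected control-plane endpoint is
// deleted so "kubeadm init phase kubeconfig all" can write a fresh copy.
func removeStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // endpoint present, keep the file
		}
		if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
			fmt.Printf("could not remove %s: %v\n", p, rmErr)
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
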
	I1124 03:40:05.118196  221785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:05.129738  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:40:05.327139  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	W1124 03:40:05.810344  221590 pod_ready.go:104] pod "coredns-66bc5c9577-wsbv8" is not "Ready", error: <nil>
	W1124 03:40:07.811224  221590 pod_ready.go:104] pod "coredns-66bc5c9577-wsbv8" is not "Ready", error: <nil>
	I1124 03:40:08.307522  221590 pod_ready.go:99] pod "coredns-66bc5c9577-wsbv8" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-wsbv8" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-wsbv8" not found
	I1124 03:40:08.307562  221590 pod_ready.go:86] duration metric: took 9.004699551s for pod "coredns-66bc5c9577-wsbv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:08.310832  221590 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-871319" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:08.317038  221590 pod_ready.go:94] pod "etcd-default-k8s-diff-port-871319" is "Ready"
	I1124 03:40:08.317072  221590 pod_ready.go:86] duration metric: took 6.204238ms for pod "etcd-default-k8s-diff-port-871319" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:08.319307  221590 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-871319" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:08.326021  221590 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-871319" is "Ready"
	I1124 03:40:08.326048  221590 pod_ready.go:86] duration metric: took 6.711694ms for pod "kube-apiserver-default-k8s-diff-port-871319" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:08.329111  221590 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-871319" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:08.335121  221590 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-871319" is "Ready"
	I1124 03:40:08.335144  221590 pod_ready.go:86] duration metric: took 6.002679ms for pod "kube-controller-manager-default-k8s-diff-port-871319" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:08.508365  221590 pod_ready.go:83] waiting for pod "kube-proxy-mb98n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:08.908122  221590 pod_ready.go:94] pod "kube-proxy-mb98n" is "Ready"
	I1124 03:40:08.908156  221590 pod_ready.go:86] duration metric: took 399.739675ms for pod "kube-proxy-mb98n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:09.129411  221590 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-871319" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:09.509392  221590 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-871319" is "Ready"
	I1124 03:40:09.509437  221590 pod_ready.go:86] duration metric: took 379.998056ms for pod "kube-scheduler-default-k8s-diff-port-871319" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:09.509457  221590 pod_ready.go:40] duration metric: took 11.220328241s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
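
The pod_ready.go lines above wait, per component label, for each kube-system pod to either report the Ready condition or disappear; the coredns pod ends up "gone" rather than Ready because it was replaced during the restart. A sketch of that ready-or-gone wait written with plain client-go (pod name and namespace are taken from the log; this is illustrative, not the test helper itself):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReadyOrGone polls until the named pod is Ready or no longer exists,
// roughly what the pod_ready.go lines above report.
func waitPodReadyOrGone(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone, treat as done
			}
			if err != nil {
				return false, nil // transient error, keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-wsbv8", 2*time.Minute))
}
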
	I1124 03:40:09.583356  221590 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:40:09.585868  221590 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-871319" cluster and "default" namespace by default
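
The "minor skew: 0" note above comes from comparing the kubectl client version (1.34.2) with the cluster's server version (1.34.1). A small sketch of the same comparison driven by `kubectl version -o json` (kubectl is assumed to be on PATH; this is not necessarily how start.go obtains the versions):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// minorSkew reports the absolute difference between the kubectl client minor
// version and the cluster's server minor version.
func minorSkew() (int, error) {
	out, err := exec.Command("kubectl", "version", "-o", "json").Output()
	if err != nil {
		return 0, err
	}
	var v struct {
		ClientVersion struct{ Minor string } `json:"clientVersion"`
		ServerVersion struct{ Minor string } `json:"serverVersion"`
	}
	if err := json.Unmarshal(out, &v); err != nil {
		return 0, err
	}
	c, _ := strconv.Atoi(strings.TrimRight(v.ClientVersion.Minor, "+"))
	s, _ := strconv.Atoi(strings.TrimRight(v.ServerVersion.Minor, "+"))
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	fmt.Println(minorSkew())
}
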
	I1124 03:40:05.389413  221986 main.go:143] libmachine: domain embed-certs-780317 has defined MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:05.389838  221986 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:a1:5d", ip: ""} in network mk-embed-certs-780317: {Iface:virbr3 ExpiryTime:2025-11-24 04:39:58 +0000 UTC Type:0 Mac:52:54:00:71:a1:5d Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:embed-certs-780317 Clientid:01:52:54:00:71:a1:5d}
	I1124 03:40:05.389885  221986 main.go:143] libmachine: domain embed-certs-780317 has defined IP address 192.168.61.33 and MAC address 52:54:00:71:a1:5d in network mk-embed-certs-780317
	I1124 03:40:05.390115  221986 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1124 03:40:05.394656  221986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
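
The grep plus the /bin/bash one-liner above make the host.minikube.internal mapping idempotent: any existing line for that name is filtered out of /etc/hosts and a fresh "IP<TAB>host.minikube.internal" entry is appended. The same edit as a Go sketch (running it for real needs root; minikube performs it over SSH inside the guest):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry reproduces the bash one-liner above: drop any existing
// line for name from /etc/hosts, then append a single "ip<TAB>name" mapping.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"))
}
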
	I1124 03:40:05.408956  221986 kubeadm.go:884] updating cluster {Name:embed-certs-780317 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.34.1 ClusterName:embed-certs-780317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeR
equested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:40:05.409106  221986 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:40:05.409163  221986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:05.441778  221986 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 03:40:05.441859  221986 ssh_runner.go:195] Run: which lz4
	I1124 03:40:05.446096  221986 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 03:40:05.450724  221986 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1124 03:40:05.450757  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1124 03:40:08.168934  221986 crio.go:462] duration metric: took 2.722900412s to copy over tarball
	I1124 03:40:08.169562  221986 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
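
This is the preload path: `crictl images` shows no kube-apiserver image in the guest, `/preloaded.tar.lz4` does not exist yet, so the ~409 MB preloaded-images tarball is copied in and unpacked under /var with lz4. A sketch of the extraction step run locally instead of through ssh_runner (flags and paths are copied from the log; requires tar, lz4, and sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a preloaded-images tarball the same way the log does:
// tar with lz4 decompression, preserving xattrs, rooted at /var.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
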
	I1124 03:40:07.950861  221785 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.623667057s)
	I1124 03:40:07.950955  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:40:08.314112  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:40:08.379188  221785 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:40:08.472816  221785 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:40:08.472913  221785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:08.973283  221785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:09.473174  221785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:09.973073  221785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:10.009442  221785 api_server.go:72] duration metric: took 1.536630289s to wait for apiserver process to appear ...
	I1124 03:40:10.009488  221785 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:40:10.009516  221785 api_server.go:253] Checking apiserver healthz at https://192.168.72.5:8443/healthz ...
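
api_server.go first waits for a kube-apiserver process to appear (the pgrep loop), then polls the /healthz endpoint until it answers. A minimal poller for that second step; the URL is the one in the log, and TLS verification is skipped here purely to keep the sketch self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200 OK,
// mirroring the "waiting for apiserver healthz status" step in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.5:8443/healthz", time.Minute))
}
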
	I1124 03:40:09.943926  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:40:09.944018  222063 machine.go:97] duration metric: took 6.427361082s to provisionDockerMachine
	I1124 03:40:09.944041  222063 start.go:293] postStartSetup for "pause-338254" (driver="kvm2")
	I1124 03:40:09.944127  222063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:40:09.944231  222063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:40:09.947662  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:09.948208  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:09.948249  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:09.948440  222063 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/pause-338254/id_rsa Username:docker}
	I1124 03:40:10.041119  222063 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:40:10.047558  222063 info.go:137] Remote host: Buildroot 2025.02
	I1124 03:40:10.047595  222063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/addons for local assets ...
	I1124 03:40:10.047678  222063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-185833/.minikube/files for local assets ...
	I1124 03:40:10.047776  222063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem -> 1897492.pem in /etc/ssl/certs
	I1124 03:40:10.047918  222063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:40:10.064415  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:40:10.099554  222063 start.go:296] duration metric: took 155.491179ms for postStartSetup
	I1124 03:40:10.099624  222063 fix.go:56] duration metric: took 6.587166942s for fixHost
	I1124 03:40:10.102980  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.103506  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:10.103556  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.103774  222063 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:10.104073  222063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1124 03:40:10.104085  222063 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 03:40:10.223487  222063 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763955610.219281221
	
	I1124 03:40:10.223543  222063 fix.go:216] guest clock: 1763955610.219281221
	I1124 03:40:10.223556  222063 fix.go:229] Guest: 2025-11-24 03:40:10.219281221 +0000 UTC Remote: 2025-11-24 03:40:10.099629369 +0000 UTC m=+25.294421385 (delta=119.651852ms)
	I1124 03:40:10.223583  222063 fix.go:200] guest clock delta is within tolerance: 119.651852ms
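
fix.go compares the guest clock (read with `date +%s.%N` over SSH) against the host-side timestamp and accepts the drift when it stays inside a tolerance, ~120ms here. A sketch of the delta computation; the runOnGuest indirection stands in for ssh_runner, and the actual tolerance value is not shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// clockDelta reads the guest clock via `date +%s.%N` and returns how far it is
// from the local clock.
func clockDelta(runOnGuest func(string) (string, error)) (time.Duration, error) {
	out, err := runOnGuest("date +%s.%N")
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	// For the sketch, the "guest" is just the local shell; minikube runs this over SSH.
	local := func(cmd string) (string, error) {
		b, err := exec.Command("/bin/sh", "-c", cmd).Output()
		return string(b), err
	}
	d, err := clockDelta(local)
	fmt.Println(d, err)
}
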
	I1124 03:40:10.223591  222063 start.go:83] releasing machines lock for "pause-338254", held for 6.711196524s
	I1124 03:40:10.227781  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.228322  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:10.228359  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.229330  222063 ssh_runner.go:195] Run: cat /version.json
	I1124 03:40:10.229610  222063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:40:10.234293  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.234644  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.235132  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:10.235168  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.235409  222063 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/pause-338254/id_rsa Username:docker}
	I1124 03:40:10.235978  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:10.236013  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:10.236255  222063 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/pause-338254/id_rsa Username:docker}
	I1124 03:40:10.355172  222063 ssh_runner.go:195] Run: systemctl --version
	I1124 03:40:10.368902  222063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:40:10.540967  222063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:40:10.551722  222063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:40:10.551831  222063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:40:10.566234  222063 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:40:10.566267  222063 start.go:496] detecting cgroup driver to use...
	I1124 03:40:10.566338  222063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:40:10.597450  222063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:40:10.625760  222063 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:40:10.625849  222063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:40:10.655727  222063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:40:10.672934  222063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:40:10.866745  222063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:40:11.067857  222063 docker.go:234] disabling docker service ...
	I1124 03:40:11.067958  222063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:40:11.109348  222063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:40:11.131672  222063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:40:11.334180  222063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:40:11.525859  222063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:40:11.543942  222063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:40:11.569662  222063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:40:11.569739  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.584795  222063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 03:40:11.585235  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.598966  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.625060  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.646293  222063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:40:11.663717  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.680277  222063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.697052  222063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:40:11.711133  222063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:40:11.722490  222063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
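
The sed runs above adjust CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf: the pause image becomes registry.k8s.io/pause:3.10.1, cgroup_manager becomes "cgroupfs", conmon_cgroup is reset to "pod", net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls, and IP forwarding is switched on. A sketch of the first two rewrites done in Go instead of sed (same file and values; it prints the result instead of writing it back):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a single "key = value" line in a CRI-O drop-in,
// the same effect as the sed one-liners in the log above.
func setCrioOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	fmt.Printf("%s", conf) // minikube writes this back and then restarts crio
}
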
	I1124 03:40:11.734028  222063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:11.918683  222063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:40:12.184068  222063 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:40:12.184265  222063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:40:12.193189  222063 start.go:564] Will wait 60s for crictl version
	I1124 03:40:12.193285  222063 ssh_runner.go:195] Run: which crictl
	I1124 03:40:12.203863  222063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 03:40:12.275618  222063 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 03:40:12.275740  222063 ssh_runner.go:195] Run: crio --version
	I1124 03:40:12.321647  222063 ssh_runner.go:195] Run: crio --version
	I1124 03:40:12.366906  222063 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
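
After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock and then for a usable `crictl version`. The sketch below waits by actually dialing the unix socket rather than stat-ing the path as the log does, which is a slightly stronger readiness check (path and timeout are taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket dials a unix socket until it accepts a connection or the
// timeout expires, corresponding to the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
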
	I1124 03:40:12.904690  221986 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.735071383s)
	I1124 03:40:12.904735  221986 crio.go:469] duration metric: took 4.73575678s to extract the tarball
	I1124 03:40:12.904745  221986 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1124 03:40:12.963008  221986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:13.012972  221986 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:40:13.013005  221986 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:40:13.013014  221986 kubeadm.go:935] updating node { 192.168.61.33 8443 v1.34.1 crio true true} ...
	I1124 03:40:13.013206  221986 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-780317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-780317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:40:13.013295  221986 ssh_runner.go:195] Run: crio config
	I1124 03:40:13.076188  221986 cni.go:84] Creating CNI manager for ""
	I1124 03:40:13.076219  221986 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:40:13.076245  221986 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:40:13.076276  221986 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-780317 NodeName:embed-certs-780317 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:40:13.076573  221986 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-780317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.33"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:40:13.076664  221986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:40:13.089743  221986 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:40:13.089820  221986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:13.102586  221986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1124 03:40:13.127752  221986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:13.154474  221986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1124 03:40:13.186648  221986 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:13.192098  221986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:13.208659  221986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:13.396479  221986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:13.428432  221986 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/embed-certs-780317 for IP: 192.168.61.33
	I1124 03:40:13.428471  221986 certs.go:195] generating shared ca certs ...
	I1124 03:40:13.428498  221986 certs.go:227] acquiring lock for ca certs: {Name:mk173959192d8348177ca5710cbe68cc42fae47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:13.428745  221986 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key
	I1124 03:40:13.428822  221986 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key
	I1124 03:40:13.428839  221986 certs.go:257] generating profile certs ...
	I1124 03:40:13.428964  221986 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/embed-certs-780317/client.key
	I1124 03:40:13.429041  221986 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/embed-certs-780317/apiserver.key.75f45f5b
	I1124 03:40:13.429135  221986 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/embed-certs-780317/proxy-client.key
	I1124 03:40:13.429313  221986 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem (1338 bytes)
	W1124 03:40:13.429368  221986 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:13.429400  221986 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 03:40:13.429477  221986 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:13.429525  221986 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:13.429576  221986 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:13.429655  221986 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:40:13.430733  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:13.474893  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 03:40:13.530021  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:13.573219  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:13.607204  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/embed-certs-780317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:40:13.644247  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/embed-certs-780317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:13.685333  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/embed-certs-780317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:13.731073  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/embed-certs-780317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:40:13.779568  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /usr/share/ca-certificates/1897492.pem (1708 bytes)
	I1124 03:40:13.825456  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:13.872469  221986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem --> /usr/share/ca-certificates/189749.pem (1338 bytes)
	I1124 03:40:13.922041  221986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:13.955337  221986 ssh_runner.go:195] Run: openssl version
	I1124 03:40:13.964831  221986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1897492.pem && ln -fs /usr/share/ca-certificates/1897492.pem /etc/ssl/certs/1897492.pem"
	I1124 03:40:13.985255  221986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1897492.pem
	I1124 03:40:13.993356  221986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:47 /usr/share/ca-certificates/1897492.pem
	I1124 03:40:13.993459  221986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1897492.pem
	I1124 03:40:14.007339  221986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1897492.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:14.028541  221986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:14.050537  221986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:14.059770  221986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:39 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:14.059871  221986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:14.073256  221986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:14.097544  221986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/189749.pem && ln -fs /usr/share/ca-certificates/189749.pem /etc/ssl/certs/189749.pem"
	I1124 03:40:14.118497  221986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/189749.pem
	I1124 03:40:14.126891  221986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:47 /usr/share/ca-certificates/189749.pem
	I1124 03:40:14.126980  221986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/189749.pem
	I1124 03:40:14.139551  221986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/189749.pem /etc/ssl/certs/51391683.0"
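
Each CA that minikube installs is linked into /etc/ssl/certs under OpenSSL's subject-hash name (minikubeCA.pem becomes b5213941.0 above), which is how OpenSSL locates trust anchors at verification time. A sketch that computes the hash by shelling out to openssl and then creates the link (needs root to write /etc/ssl/certs; illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash installs a CA certificate under /etc/ssl/certs using the
// OpenSSL subject-hash naming scheme (<hash>.0), as the log does for
// minikubeCA.pem -> b5213941.0.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // refresh an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"))
}
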
	I1124 03:40:14.166629  221986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:14.176633  221986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:40:14.193267  221986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:40:14.206347  221986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:40:14.217800  221986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:40:14.230111  221986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:40:14.243050  221986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
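
The `-checkend 86400` runs above ask openssl whether each control-plane certificate expires within the next 24 hours (86400 seconds). The same test expressed with Go's crypto/x509, as a sketch; the path is one of the certificates checked in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether a PEM certificate expires within d,
// equivalent to `openssl x509 -checkend 86400` when d is 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
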
	I1124 03:40:14.254727  221986 kubeadm.go:401] StartCluster: {Name:embed-certs-780317 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34
.1 ClusterName:embed-certs-780317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:14.254879  221986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:14.255216  221986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:14.307440  221986 cri.go:89] found id: ""
	I1124 03:40:14.307628  221986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:14.326612  221986 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:40:14.326642  221986 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:40:14.326717  221986 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:40:14.345758  221986 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:40:14.349759  221986 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-780317" does not appear in /home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:40:14.350493  221986 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-185833/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-780317" cluster setting kubeconfig missing "embed-certs-780317" context setting]
	I1124 03:40:14.351648  221986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/kubeconfig: {Name:mkcda9156e9d84203343cbeb8993f30147e2224f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:14.353819  221986 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:40:14.371206  221986 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.61.33
	I1124 03:40:14.371260  221986 kubeadm.go:1161] stopping kube-system containers ...
	I1124 03:40:14.371278  221986 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 03:40:14.371343  221986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:14.424224  221986 cri.go:89] found id: ""
	I1124 03:40:14.424334  221986 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 03:40:14.452215  221986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:14.470601  221986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:14.470627  221986 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:14.470684  221986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:14.486662  221986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:14.486741  221986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:14.504241  221986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:14.520294  221986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:14.520399  221986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:14.539327  221986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:14.558560  221986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:14.558634  221986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:14.576305  221986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:14.592187  221986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:14.592273  221986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:40:14.605977  221986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:14.620070  221986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:40:14.685807  221986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 03:40:12.371731  222063 main.go:143] libmachine: domain pause-338254 has defined MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:12.372221  222063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:e7:c6", ip: ""} in network mk-pause-338254: {Iface:virbr1 ExpiryTime:2025-11-24 04:38:41 +0000 UTC Type:0 Mac:52:54:00:f0:e7:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:pause-338254 Clientid:01:52:54:00:f0:e7:c6}
	I1124 03:40:12.372244  222063 main.go:143] libmachine: domain pause-338254 has defined IP address 192.168.39.187 and MAC address 52:54:00:f0:e7:c6 in network mk-pause-338254
	I1124 03:40:12.372483  222063 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1124 03:40:12.379476  222063 kubeadm.go:884] updating cluster {Name:pause-338254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-338254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:40:12.379671  222063 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:40:12.379745  222063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:12.435439  222063 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:40:12.435471  222063 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:40:12.435554  222063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:12.475715  222063 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:40:12.475753  222063 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:40:12.475764  222063 kubeadm.go:935] updating node { 192.168.39.187 8443 v1.34.1 crio true true} ...
	I1124 03:40:12.475911  222063 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-338254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-338254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:40:12.476014  222063 ssh_runner.go:195] Run: crio config
	I1124 03:40:12.540400  222063 cni.go:84] Creating CNI manager for ""
	I1124 03:40:12.540432  222063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:40:12.540457  222063 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:40:12.540488  222063 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-338254 NodeName:pause-338254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:40:12.540669  222063 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-338254"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:40:12.540770  222063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:40:12.561006  222063 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:40:12.561091  222063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:12.578250  222063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1124 03:40:12.612215  222063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:12.643652  222063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1124 03:40:12.670956  222063 ssh_runner.go:195] Run: grep 192.168.39.187	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:12.677964  222063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:12.882890  222063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:12.904724  222063 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254 for IP: 192.168.39.187
	I1124 03:40:12.904744  222063 certs.go:195] generating shared ca certs ...
	I1124 03:40:12.904769  222063 certs.go:227] acquiring lock for ca certs: {Name:mk173959192d8348177ca5710cbe68cc42fae47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:12.904966  222063 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key
	I1124 03:40:12.905052  222063 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key
	I1124 03:40:12.905067  222063 certs.go:257] generating profile certs ...
	I1124 03:40:12.905221  222063 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/client.key
	I1124 03:40:12.905352  222063 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/apiserver.key.c11b338a
	I1124 03:40:12.905445  222063 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/proxy-client.key
	I1124 03:40:12.905621  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem (1338 bytes)
	W1124 03:40:12.905679  222063 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:12.905693  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 03:40:12.905738  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:12.905780  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:12.905809  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:12.905871  222063 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem (1708 bytes)
	I1124 03:40:12.906763  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:12.971486  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 03:40:13.008162  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:13.039860  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:13.073664  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 03:40:13.110448  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:40:13.153930  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:13.283788  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/pause-338254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:13.341414  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/certs/189749.pem --> /usr/share/ca-certificates/189749.pem (1338 bytes)
	I1124 03:40:13.400194  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/ssl/certs/1897492.pem --> /usr/share/ca-certificates/1897492.pem (1708 bytes)
	I1124 03:40:13.480265  222063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:13.599416  222063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:13.692426  222063 ssh_runner.go:195] Run: openssl version
	I1124 03:40:13.714045  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1897492.pem && ln -fs /usr/share/ca-certificates/1897492.pem /etc/ssl/certs/1897492.pem"
	I1124 03:40:13.750423  222063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1897492.pem
	I1124 03:40:13.765536  222063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:47 /usr/share/ca-certificates/1897492.pem
	I1124 03:40:13.765751  222063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1897492.pem
	I1124 03:40:13.788697  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1897492.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:13.824836  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:13.873459  222063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:13.884945  222063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:39 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:13.885029  222063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:13.900126  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:13.930441  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/189749.pem && ln -fs /usr/share/ca-certificates/189749.pem /etc/ssl/certs/189749.pem"
	I1124 03:40:13.959840  222063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/189749.pem
	I1124 03:40:13.978928  222063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:47 /usr/share/ca-certificates/189749.pem
	I1124 03:40:13.979015  222063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/189749.pem
	I1124 03:40:14.008504  222063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/189749.pem /etc/ssl/certs/51391683.0"
	I1124 03:40:14.061447  222063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:14.083797  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:40:14.110385  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:40:14.132611  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:40:14.152598  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:40:14.204731  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:40:14.235528  222063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
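	
Note: each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours), exiting non-zero if it does. A rough standard-library Go equivalent of that check is sketched below; the certificate path is just one example taken from the log and the program itself is not part of minikube.

	// checkend.go: sketch of the `openssl x509 -noout -checkend 86400` test,
	// using only Go's standard library.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		// Example path from the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// -checkend 86400: report if the cert's NotAfter falls within the next 24h.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1) // openssl also exits non-zero in this case
		}
		fmt.Println("certificate is valid for at least 24h")
	}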
	I1124 03:40:14.294461  222063 kubeadm.go:401] StartCluster: {Name:pause-338254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-338254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:14.294641  222063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:14.294735  222063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:14.450649  222063 cri.go:89] found id: "1745a9dfdd711a8ee834728c86110dd2e50c0839b6a2ae5b7741ca646e4fa2cf"
	I1124 03:40:14.450683  222063 cri.go:89] found id: "6aa3ded1ef102324c911cc3a9284ffb02ad584ceaaa53d6767459fedd68b5ab0"
	I1124 03:40:14.450690  222063 cri.go:89] found id: "6dd9863b4e925db23c7f2417e2265709ea171629350bacc2b6f52cc973632214"
	I1124 03:40:14.450695  222063 cri.go:89] found id: "d838ec10bf519b6238f83e68f9bb42b155709dcb3557d8ef647b0a73c31cd0aa"
	I1124 03:40:14.450699  222063 cri.go:89] found id: "7b70cc751747d1a5ed60bd015f3df7de1c179505c3e57ab74febaa54f4092338"
	I1124 03:40:14.450707  222063 cri.go:89] found id: "3b6eb5c0748537dff2962b9590a1dbc87049ca8d280782be74af20026bdc6cac"
	I1124 03:40:14.450712  222063 cri.go:89] found id: "3020fb059fc9d0157f9512beada229b8ace02691eddde40e7fc146e1522e0734"
	I1124 03:40:14.450716  222063 cri.go:89] found id: "962ce9fe41009440988027b6ec6a31651dcca599dae679db54773a25c13da3fa"
	I1124 03:40:14.450720  222063 cri.go:89] found id: ""
	I1124 03:40:14.450780  222063 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
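
Note: near the end of the log above, minikube enumerates kube-system containers with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and records each returned ID ("found id: ..."). In the test run this command is executed on the VM through ssh_runner; the sketch below shows the same collection step locally with os/exec and is illustrative rather than minikube's implementation.

	// crictl_ids.go: sketch of collecting container IDs the way the log shows,
	// by running crictl and splitting its --quiet output into IDs.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same command as in the log; requires crictl and root privileges on the host.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out)) // one container ID per line
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
		fmt.Printf("%d kube-system containers\n", len(ids))
	}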
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-338254 -n pause-338254
helpers_test.go:269: (dbg) Run:  kubectl --context pause-338254 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-338254 -n pause-338254
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-338254 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-338254 logs -n 25: (1.318774363s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ addons  │ enable metrics-server -p embed-certs-780317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ stop    │ -p embed-certs-780317 --alsologtostderr -v=3                                                                                                                                                                                                │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-469670    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │                     │
	│ start   │ -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-469670    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ delete  │ -p kubernetes-upgrade-469670                                                                                                                                                                                                                │ kubernetes-upgrade-469670    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ start   │ -p pause-338254 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                                     │ pause-338254                 │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p cert-expiration-734487 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                                     │ cert-expiration-734487       │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p cert-expiration-734487                                                                                                                                                                                                                   │ cert-expiration-734487       │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p default-k8s-diff-port-871319 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-871319 │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ addons  │ enable dashboard -p no-preload-646844 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p no-preload-646844 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-780317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p embed-certs-780317 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ start   │ -p pause-338254 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-338254                 │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-871319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                          │ default-k8s-diff-port-871319 │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ stop    │ -p default-k8s-diff-port-871319 --alsologtostderr -v=3                                                                                                                                                                                      │ default-k8s-diff-port-871319 │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │                     │
	│ image   │ no-preload-646844 image list --format=json                                                                                                                                                                                                  │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ pause   │ -p no-preload-646844 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ unpause │ -p no-preload-646844 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ delete  │ -p no-preload-646844                                                                                                                                                                                                                        │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ image   │ embed-certs-780317 image list --format=json                                                                                                                                                                                                 │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ delete  │ -p no-preload-646844                                                                                                                                                                                                                        │ no-preload-646844            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ pause   │ -p embed-certs-780317 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ start   │ -p newest-cni-788142 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-788142            │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │                     │
	│ unpause │ -p embed-certs-780317 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-780317           │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:40:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:40:49.041756  222848 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:40:49.041935  222848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:40:49.041951  222848 out.go:374] Setting ErrFile to fd 2...
	I1124 03:40:49.041958  222848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:40:49.042253  222848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:40:49.042908  222848 out.go:368] Setting JSON to false
	I1124 03:40:49.043967  222848 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12189,"bootTime":1763943460,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:40:49.044028  222848 start.go:143] virtualization: kvm guest
	I1124 03:40:49.048513  222848 out.go:179] * [newest-cni-788142] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:40:49.050088  222848 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:40:49.050081  222848 notify.go:221] Checking for updates...
	I1124 03:40:49.051204  222848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:40:49.052284  222848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:40:49.053308  222848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 03:40:49.054507  222848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:40:49.055586  222848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:40:49.057275  222848 config.go:182] Loaded profile config "default-k8s-diff-port-871319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:40:49.057489  222848 config.go:182] Loaded profile config "embed-certs-780317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:40:49.057617  222848 config.go:182] Loaded profile config "guest-288632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1124 03:40:49.057773  222848 config.go:182] Loaded profile config "pause-338254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:40:49.057905  222848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:40:49.103477  222848 out.go:179] * Using the kvm2 driver based on user configuration
	I1124 03:40:49.104665  222848 start.go:309] selected driver: kvm2
	I1124 03:40:49.104680  222848 start.go:927] validating driver "kvm2" against <nil>
	I1124 03:40:49.104692  222848 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:40:49.105484  222848 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1124 03:40:49.105561  222848 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1124 03:40:49.105791  222848 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:40:49.105825  222848 cni.go:84] Creating CNI manager for ""
	I1124 03:40:49.105888  222848 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 03:40:49.105900  222848 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 03:40:49.105964  222848 start.go:353] cluster config:
	{Name:newest-cni-788142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-788142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:49.106079  222848 iso.go:125] acquiring lock: {Name:mk63ee8f30093c8c7d0696dd2486a8eb0d8bd024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:40:49.107496  222848 out.go:179] * Starting "newest-cni-788142" primary control-plane node in "newest-cni-788142" cluster
	I1124 03:40:49.108451  222848 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:40:49.108478  222848 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:40:49.108490  222848 cache.go:65] Caching tarball of preloaded images
	I1124 03:40:49.108567  222848 preload.go:238] Found /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:40:49.108581  222848 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:40:49.108700  222848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/newest-cni-788142/config.json ...
	I1124 03:40:49.108725  222848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/newest-cni-788142/config.json: {Name:mk889f84f5dd9fa9cc87cc69bde17211c622f313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:49.108892  222848 start.go:360] acquireMachinesLock for newest-cni-788142: {Name:mk6edb9cd27540c3b670af896ffc377aa954769e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 03:40:49.108931  222848 start.go:364] duration metric: took 22.031µs to acquireMachinesLock for "newest-cni-788142"
	I1124 03:40:49.108957  222848 start.go:93] Provisioning new machine with config: &{Name:newest-cni-788142 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.34.1 ClusterName:newest-cni-788142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:40:49.109024  222848 start.go:125] createHost starting for "" (driver="kvm2")
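	
Note: the lock entries above (e.g. "WriteFile acquiring ... Delay:500ms Timeout:1m0s" and acquireMachinesLock) describe a retry policy: attempt to take the lock, wait the delay, and give up after the timeout. The sketch below illustrates that retry pattern with an exclusive lock file; the approach and names are hypothetical and not minikube's lock.go.

	// lockretry.go: illustrative retry-with-delay lock acquisition,
	// mirroring the Delay/Timeout fields shown in the lock specs above.
	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// acquire tries to create path exclusively, retrying every delay until timeout.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil // release removes the lock file
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}
	
	func main() {
		release, err := acquire("/tmp/example.lock", 500*time.Millisecond, time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer release()
		fmt.Println("lock held; doing work")
	}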
	
	
	==> CRI-O <==
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.313309183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955651313286675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d41d375-1fb0-4061-9ea5-9d9939bd9acb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.314108647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bc2d15d-b640-4358-a05a-5963ecd728d9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.314188506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bc2d15d-b640-4358-a05a-5963ecd728d9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.314583353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d26b92c84c000e397cb5cd553cb9985c10191093636cf348385f3d88402b5d68,PodSandboxId:8022be719c08af46322366f4df30cd735b921d9d8bda7eda204dab3062c641bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763955635564805052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5ltvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38935252-f0dc-41c5-a694-768a35ae643e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82be8eb1cfd0def4f547e4bc20695197afe026027106fd1af681930a72f62b20,PodSandboxId:f9a62c7f608c527ef1f9b50c70b630673d6a5b59b30b1599a72c5fb8f26b70cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763955630961620012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b90bfc34fc179d98e0db0af69abcfa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592bfddef7f3843239a8ccbca41e75d742b5b47035eb720c22b687aadcfc3991,PodSandboxId:74794e9a087f7745423d0f3b26a1205105bfdf470b1941b54a8ac3f6b7de19c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763955630970142753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ee4fe75463e3abdfd79cb647e583f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc6
3c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828c8ce5d0da8c3d47fda88aab05483405c3812f380b667acbdab5894d7cd977,PodSandboxId:d0de2f0f548261a48da37c4277744ef9535dca5b92bdabd2ca36684138204b8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763955630947497442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-338254,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 9be01ceadb61d50d8bbdb4bd8b16af46,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfd0b747372393f0fa0f57e342ec07f5b36354e56aca5b20c1c506663627d01,PodSandboxId:561b329fbdf7da9f5999abf913f4205c2e3949ddd09626ba7914ab31e6a27585,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763955614922593041,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-66bc5c9577-r62dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b22871-395b-4911-a020-13ac35cbd62c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9d4eaa90fb0345e57e83fc3f129e30b35b49269aefe44835a05b0d34bcb06a,PodSandboxId:2ead61f0c92a54e0422f74d00950c22ff8d9d43cffcf8675452fb337578b5d32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763955614214408014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd078a510b112b44279292cb0b6e699,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3107ea0c8b070f17d3c3521a2168bdf0343a9d16631ca685fce37c759507201e,PodSandboxId:d0de2f0f548261a48da37c4277744ef9535dca5b92bdab
d2ca36684138204b8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763955614112873416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be01ceadb61d50d8bbdb4bd8b16af46,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6e77d71f8cd87d5914a71b4dfde1812609a19db003223ac28acf6c3fb87f3cce,PodSandboxId:f9a62c7f608c527ef1f9b50c70b630673d6a5b59b30b1599a72c5fb8f26b70cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763955614069549456,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b90bfc34fc179d98e0db0af69abcfa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1745a9dfdd711a8ee834728c86110dd2e50c0839b6a2ae5b7741ca646e4fa2cf,PodSandboxId:74794e9a087f7745423d0f3b26a1205105bfdf470b1941b54a8ac3f6b7de19c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763955613915104958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ee4fe75463e3abdfd79cb647e583f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa3ded1ef102324c911cc3a9284ffb02ad584ceaaa53d6767459fedd68b5ab0,PodSandboxId:8022be719c08af46322366f4df30cd735b921d9d8bda7eda204dab3062c641bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763955613727275610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5ltvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38935252-f0dc-41c5-a694-768a35ae643e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d838ec10bf519b6238f83e68f9bb42b155709dcb3557d8ef647b0a73c31cd0aa,PodSandboxId:ab94fe59fcd027d6809643feefbddee387aff98410534a72c7b460c58fb57872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763955546278288663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r62dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b22871-395b-4911-a020-13ac35cbd62c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b70cc751747d1a5ed60bd015f3df7de1c179505c3e57ab74febaa54f4092338,PodSandboxId:48457135ed3b4be344867682d84cde9cce85add5e7542f3bd2ef8bc116f52ce6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763955534516722611,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd078a510b112b44279292cb0b6e699,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bc2d15d-b640-4358-a05a-5963ecd728d9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.359777530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ac84e67-aa6a-47cf-a4d6-55bb34be41ee name=/runtime.v1.RuntimeService/Version
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.359873225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ac84e67-aa6a-47cf-a4d6-55bb34be41ee name=/runtime.v1.RuntimeService/Version
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.361509964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89fd3bd5-b9e5-4e36-aa9c-24e932768f86 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.362353064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955651362327014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89fd3bd5-b9e5-4e36-aa9c-24e932768f86 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.363360044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b70b6e67-4696-4338-8054-d50b95e1b6a5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.363494694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b70b6e67-4696-4338-8054-d50b95e1b6a5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.363787886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d26b92c84c000e397cb5cd553cb9985c10191093636cf348385f3d88402b5d68,PodSandboxId:8022be719c08af46322366f4df30cd735b921d9d8bda7eda204dab3062c641bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763955635564805052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5ltvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38935252-f0dc-41c5-a694-768a35ae643e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82be8eb1cfd0def4f547e4bc20695197afe026027106fd1af681930a72f62b20,PodSandboxId:f9a62c7f608c527ef1f9b50c70b630673d6a5b59b30b1599a72c5fb8f26b70cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763955630961620012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b90bfc34fc179d98e0db0af69abcfa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592bfddef7f3843239a8ccbca41e75d742b5b47035eb720c22b687aadcfc3991,PodSandboxId:74794e9a087f7745423d0f3b26a1205105bfdf470b1941b54a8ac3f6b7de19c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763955630970142753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ee4fe75463e3abdfd79cb647e583f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc6
3c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828c8ce5d0da8c3d47fda88aab05483405c3812f380b667acbdab5894d7cd977,PodSandboxId:d0de2f0f548261a48da37c4277744ef9535dca5b92bdabd2ca36684138204b8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763955630947497442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-338254,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 9be01ceadb61d50d8bbdb4bd8b16af46,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfd0b747372393f0fa0f57e342ec07f5b36354e56aca5b20c1c506663627d01,PodSandboxId:561b329fbdf7da9f5999abf913f4205c2e3949ddd09626ba7914ab31e6a27585,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763955614922593041,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-66bc5c9577-r62dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b22871-395b-4911-a020-13ac35cbd62c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9d4eaa90fb0345e57e83fc3f129e30b35b49269aefe44835a05b0d34bcb06a,PodSandboxId:2ead61f0c92a54e0422f74d00950c22ff8d9d43cffcf8675452fb337578b5d32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763955614214408014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd078a510b112b44279292cb0b6e699,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3107ea0c8b070f17d3c3521a2168bdf0343a9d16631ca685fce37c759507201e,PodSandboxId:d0de2f0f548261a48da37c4277744ef9535dca5b92bdab
d2ca36684138204b8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763955614112873416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be01ceadb61d50d8bbdb4bd8b16af46,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6e77d71f8cd87d5914a71b4dfde1812609a19db003223ac28acf6c3fb87f3cce,PodSandboxId:f9a62c7f608c527ef1f9b50c70b630673d6a5b59b30b1599a72c5fb8f26b70cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763955614069549456,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b90bfc34fc179d98e0db0af69abcfa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1745a9dfdd711a8ee834728c86110dd2e50c0839b6a2ae5b7741ca646e4fa2cf,PodSandboxId:74794e9a087f7745423d0f3b26a1205105bfdf470b1941b54a8ac3f6b7de19c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763955613915104958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ee4fe75463e3abdfd79cb647e583f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa3ded1ef102324c911cc3a9284ffb02ad584ceaaa53d6767459fedd68b5ab0,PodSandboxId:8022be719c08af46322366f4df30cd735b921d9d8bda7eda204dab3062c641bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763955613727275610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5ltvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38935252-f0dc-41c5-a694-768a35ae643e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d838ec10bf519b6238f83e68f9bb42b155709dcb3557d8ef647b0a73c31cd0aa,PodSandboxId:ab94fe59fcd027d6809643feefbddee387aff98410534a72c7b460c58fb57872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763955546278288663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r62dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b22871-395b-4911-a020-13ac35cbd62c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b70cc751747d1a5ed60bd015f3df7de1c179505c3e57ab74febaa54f4092338,PodSandboxId:48457135ed3b4be344867682d84cde9cce85add5e7542f3bd2ef8bc116f52ce6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763955534516722611,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd078a510b112b44279292cb0b6e699,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b70b6e67-4696-4338-8054-d50b95e1b6a5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.406197192Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8d6bf4d-d7d8-4b25-bfd1-7b8cb624fd55 name=/runtime.v1.RuntimeService/Version
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.406621785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8d6bf4d-d7d8-4b25-bfd1-7b8cb624fd55 name=/runtime.v1.RuntimeService/Version
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.408506924Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e835cfe9-7125-4f0e-8980-cd7764d32232 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.408985682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955651408962323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e835cfe9-7125-4f0e-8980-cd7764d32232 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.410309339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b409df62-a6ad-4b88-9ec3-7b82ebe55086 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.410398479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b409df62-a6ad-4b88-9ec3-7b82ebe55086 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.410944055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d26b92c84c000e397cb5cd553cb9985c10191093636cf348385f3d88402b5d68,PodSandboxId:8022be719c08af46322366f4df30cd735b921d9d8bda7eda204dab3062c641bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763955635564805052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5ltvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38935252-f0dc-41c5-a694-768a35ae643e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82be8eb1cfd0def4f547e4bc20695197afe026027106fd1af681930a72f62b20,PodSandboxId:f9a62c7f608c527ef1f9b50c70b630673d6a5b59b30b1599a72c5fb8f26b70cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763955630961620012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b90bfc34fc179d98e0db0af69abcfa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592bfddef7f3843239a8ccbca41e75d742b5b47035eb720c22b687aadcfc3991,PodSandboxId:74794e9a087f7745423d0f3b26a1205105bfdf470b1941b54a8ac3f6b7de19c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763955630970142753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ee4fe75463e3abdfd79cb647e583f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc6
3c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828c8ce5d0da8c3d47fda88aab05483405c3812f380b667acbdab5894d7cd977,PodSandboxId:d0de2f0f548261a48da37c4277744ef9535dca5b92bdabd2ca36684138204b8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763955630947497442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-338254,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 9be01ceadb61d50d8bbdb4bd8b16af46,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfd0b747372393f0fa0f57e342ec07f5b36354e56aca5b20c1c506663627d01,PodSandboxId:561b329fbdf7da9f5999abf913f4205c2e3949ddd09626ba7914ab31e6a27585,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763955614922593041,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-66bc5c9577-r62dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b22871-395b-4911-a020-13ac35cbd62c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9d4eaa90fb0345e57e83fc3f129e30b35b49269aefe44835a05b0d34bcb06a,PodSandboxId:2ead61f0c92a54e0422f74d00950c22ff8d9d43cffcf8675452fb337578b5d32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763955614214408014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd078a510b112b44279292cb0b6e699,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3107ea0c8b070f17d3c3521a2168bdf0343a9d16631ca685fce37c759507201e,PodSandboxId:d0de2f0f548261a48da37c4277744ef9535dca5b92bdab
d2ca36684138204b8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763955614112873416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be01ceadb61d50d8bbdb4bd8b16af46,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6e77d71f8cd87d5914a71b4dfde1812609a19db003223ac28acf6c3fb87f3cce,PodSandboxId:f9a62c7f608c527ef1f9b50c70b630673d6a5b59b30b1599a72c5fb8f26b70cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763955614069549456,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b90bfc34fc179d98e0db0af69abcfa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1745a9dfdd711a8ee834728c86110dd2e50c0839b6a2ae5b7741ca646e4fa2cf,PodSandboxId:74794e9a087f7745423d0f3b26a1205105bfdf470b1941b54a8ac3f6b7de19c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763955613915104958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ee4fe75463e3abdfd79cb647e583f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa3ded1ef102324c911cc3a9284ffb02ad584ceaaa53d6767459fedd68b5ab0,PodSandboxId:8022be719c08af46322366f4df30cd735b921d9d8bda7eda204dab3062c641bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763955613727275610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5ltvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38935252-f0dc-41c5-a694-768a35ae643e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d838ec10bf519b6238f83e68f9bb42b155709dcb3557d8ef647b0a73c31cd0aa,PodSandboxId:ab94fe59fcd027d6809643feefbddee387aff98410534a72c7b460c58fb57872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763955546278288663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r62dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b22871-395b-4911-a020-13ac35cbd62c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b70cc751747d1a5ed60bd015f3df7de1c179505c3e57ab74febaa54f4092338,PodSandboxId:48457135ed3b4be344867682d84cde9cce85add5e7542f3bd2ef8bc116f52ce6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763955534516722611,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd078a510b112b44279292cb0b6e699,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b409df62-a6ad-4b88-9ec3-7b82ebe55086 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.464795459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24f11c0c-5a85-4c63-89eb-ec4968c7ac5f name=/runtime.v1.RuntimeService/Version
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.465063432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24f11c0c-5a85-4c63-89eb-ec4968c7ac5f name=/runtime.v1.RuntimeService/Version
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.467052459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1104af9-7fcc-4d13-a6f2-d2726258ce6a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.467786677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763955651467762935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1104af9-7fcc-4d13-a6f2-d2726258ce6a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.469024325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=125d8f3e-ba41-422f-aab9-182af675501d name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.469134313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=125d8f3e-ba41-422f-aab9-182af675501d name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 03:40:51 pause-338254 crio[2771]: time="2025-11-24 03:40:51.469615133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d26b92c84c000e397cb5cd553cb9985c10191093636cf348385f3d88402b5d68,PodSandboxId:8022be719c08af46322366f4df30cd735b921d9d8bda7eda204dab3062c641bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763955635564805052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5ltvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38935252-f0dc-41c5-a694-768a35ae643e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82be8eb1cfd0def4f547e4bc20695197afe026027106fd1af681930a72f62b20,PodSandboxId:f9a62c7f608c527ef1f9b50c70b630673d6a5b59b30b1599a72c5fb8f26b70cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763955630961620012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b90bfc34fc179d98e0db0af69abcfa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592bfddef7f3843239a8ccbca41e75d742b5b47035eb720c22b687aadcfc3991,PodSandboxId:74794e9a087f7745423d0f3b26a1205105bfdf470b1941b54a8ac3f6b7de19c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763955630970142753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ee4fe75463e3abdfd79cb647e583f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc6
3c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828c8ce5d0da8c3d47fda88aab05483405c3812f380b667acbdab5894d7cd977,PodSandboxId:d0de2f0f548261a48da37c4277744ef9535dca5b92bdabd2ca36684138204b8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763955630947497442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-338254,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 9be01ceadb61d50d8bbdb4bd8b16af46,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfd0b747372393f0fa0f57e342ec07f5b36354e56aca5b20c1c506663627d01,PodSandboxId:561b329fbdf7da9f5999abf913f4205c2e3949ddd09626ba7914ab31e6a27585,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763955614922593041,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-66bc5c9577-r62dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b22871-395b-4911-a020-13ac35cbd62c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9d4eaa90fb0345e57e83fc3f129e30b35b49269aefe44835a05b0d34bcb06a,PodSandboxId:2ead61f0c92a54e0422f74d00950c22ff8d9d43cffcf8675452fb337578b5d32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763955614214408014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd078a510b112b44279292cb0b6e699,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3107ea0c8b070f17d3c3521a2168bdf0343a9d16631ca685fce37c759507201e,PodSandboxId:d0de2f0f548261a48da37c4277744ef9535dca5b92bdab
d2ca36684138204b8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763955614112873416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be01ceadb61d50d8bbdb4bd8b16af46,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6e77d71f8cd87d5914a71b4dfde1812609a19db003223ac28acf6c3fb87f3cce,PodSandboxId:f9a62c7f608c527ef1f9b50c70b630673d6a5b59b30b1599a72c5fb8f26b70cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763955614069549456,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b90bfc34fc179d98e0db0af69abcfa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1745a9dfdd711a8ee834728c86110dd2e50c0839b6a2ae5b7741ca646e4fa2cf,PodSandboxId:74794e9a087f7745423d0f3b26a1205105bfdf470b1941b54a8ac3f6b7de19c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763955613915104958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ee4fe75463e3abdfd79cb647e583f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa3ded1ef102324c911cc3a9284ffb02ad584ceaaa53d6767459fedd68b5ab0,PodSandboxId:8022be719c08af46322366f4df30cd735b921d9d8bda7eda204dab3062c641bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763955613727275610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5ltvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38935252-f0dc-41c5-a694-768a35ae643e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d838ec10bf519b6238f83e68f9bb42b155709dcb3557d8ef647b0a73c31cd0aa,PodSandboxId:ab94fe59fcd027d6809643feefbddee387aff98410534a72c7b460c58fb57872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763955546278288663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r62dg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b22871-395b-4911-a020-13ac35cbd62c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b70cc751747d1a5ed60bd015f3df7de1c179505c3e57ab74febaa54f4092338,PodSandboxId:48457135ed3b4be344867682d84cde9cce85add5e7542f3bd2ef8bc116f52ce6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763955534516722611,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-338254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd078a510b112b44279292cb0b6e699,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=125d8f3e-ba41-422f-aab9-182af675501d name=/runtime.v1.RuntimeService/ListContainers
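
	For orientation, the ListContainers traffic above can be reproduced against the same runtime with a small CRI client. The sketch below is not part of the test suite; it assumes the default CRI-O socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 bindings. An empty filter returns the full container list, matching the "No filters were applied, returning full container list" debug lines.

	// Sketch only: list containers over the CRI API, the same RPC logged above.
	// The socket path is an assumption; adjust if CRI-O is configured differently.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// gRPC understands the unix:// scheme natively, so no custom dialer is needed.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		// Empty filter == full container list, as in the debug logs above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			id := c.Id
			if len(id) > 13 {
				id = id[:13] // truncate like the "container status" table below
			}
			fmt.Printf("%s %-24s attempt=%d %s\n", id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

	Run from the node (e.g. over minikube ssh) the output corresponds to the ==> container status <== table further down.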
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d26b92c84c000       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   15 seconds ago       Running             kube-proxy                2                   8022be719c08a       kube-proxy-5ltvq                       kube-system
	592bfddef7f38       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   20 seconds ago       Running             kube-apiserver            2                   74794e9a087f7       kube-apiserver-pause-338254            kube-system
	82be8eb1cfd0d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   20 seconds ago       Running             kube-scheduler            2                   f9a62c7f608c5       kube-scheduler-pause-338254            kube-system
	828c8ce5d0da8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   20 seconds ago       Running             kube-controller-manager   2                   d0de2f0f54826       kube-controller-manager-pause-338254   kube-system
	2dfd0b7473723       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   36 seconds ago       Running             coredns                   1                   561b329fbdf7d       coredns-66bc5c9577-r62dg               kube-system
	ee9d4eaa90fb0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago       Running             etcd                      1                   2ead61f0c92a5       etcd-pause-338254                      kube-system
	3107ea0c8b070       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago       Exited              kube-controller-manager   1                   d0de2f0f54826       kube-controller-manager-pause-338254   kube-system
	6e77d71f8cd87       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago       Exited              kube-scheduler            1                   f9a62c7f608c5       kube-scheduler-pause-338254            kube-system
	1745a9dfdd711       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago       Exited              kube-apiserver            1                   74794e9a087f7       kube-apiserver-pause-338254            kube-system
	6aa3ded1ef102       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   37 seconds ago       Exited              kube-proxy                1                   8022be719c08a       kube-proxy-5ltvq                       kube-system
	d838ec10bf519       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   ab94fe59fcd02       coredns-66bc5c9577-r62dg               kube-system
	7b70cc751747d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   48457135ed3b4       etcd-pause-338254                      kube-system
	
	
	==> coredns [2dfd0b747372393f0fa0f57e342ec07f5b36354e56aca5b20c1c506663627d01] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36791 - 61040 "HINFO IN 1946233854260270824.1492493534150038087. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.050204586s
	
	
	==> coredns [d838ec10bf519b6238f83e68f9bb42b155709dcb3557d8ef647b0a73c31cd0aa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37050 - 56630 "HINFO IN 5802073376610894386.5965821267896532192. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.166629255s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
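
	The reflector errors above show this CoreDNS instance timing out against the kubernetes Service VIP (dial tcp 10.96.0.1:443: i/o timeout) before being terminated. A minimal reachability check, written as a sketch rather than anything the test runs, is a plain TCP dial to that VIP from inside a pod; the address comes straight from the log lines above.

	// Sketch: probe raw TCP reachability of the kubernetes Service VIP that the
	// CoreDNS reflectors failed to reach. Intended to run inside a cluster pod.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // same failure mode as the i/o timeouts above
			return
		}
		conn.Close()
		fmt.Println("reachable")
	}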
	
	
	==> describe nodes <==
	Name:               pause-338254
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-338254
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=pause-338254
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_39_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:38:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-338254
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:40:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:40:34 +0000   Mon, 24 Nov 2025 03:38:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:40:34 +0000   Mon, 24 Nov 2025 03:38:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:40:34 +0000   Mon, 24 Nov 2025 03:38:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:40:34 +0000   Mon, 24 Nov 2025 03:39:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    pause-338254
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 62fc35a184d84a7499194f4e91de5cce
	  System UUID:                62fc35a1-84d8-4a74-9919-4f4e91de5cce
	  Boot ID:                    ca62fcec-bc30-46ce-b855-be0168af84d9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-r62dg                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     106s
	  kube-system                 etcd-pause-338254                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         112s
	  kube-system                 kube-apiserver-pause-338254             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-pause-338254    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-5ltvq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-338254             100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node pause-338254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node pause-338254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)  kubelet          Node pause-338254 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node pause-338254 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node pause-338254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node pause-338254 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  NodeReady                110s                 kubelet          Node pause-338254 status is now: NodeReady
	  Normal  RegisteredNode           107s                 node-controller  Node pause-338254 event: Registered Node pause-338254 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node pause-338254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node pause-338254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node pause-338254 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                  node-controller  Node pause-338254 event: Registered Node pause-338254 in Controller
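
	The node conditions and events above come from the API server; the same conditions can be read programmatically with client-go. The sketch below is not part of the test suite: it assumes a kubeconfig at the default path, and the node name pause-338254 is taken from the output above.

	// Sketch: print the Ready/MemoryPressure/DiskPressure/PIDPressure conditions
	// shown in the "describe nodes" section, using client-go.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-338254", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}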
	
	
	==> dmesg <==
	[Nov24 03:38] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001613] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001858] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.156340] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.091517] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.091869] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.126601] kauditd_printk_skb: 171 callbacks suppressed
	[Nov24 03:39] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025727] kauditd_printk_skb: 218 callbacks suppressed
	[  +0.547658] kauditd_printk_skb: 37 callbacks suppressed
	[Nov24 03:40] kauditd_printk_skb: 320 callbacks suppressed
	[  +5.547369] kauditd_printk_skb: 63 callbacks suppressed
	
	
	==> etcd [7b70cc751747d1a5ed60bd015f3df7de1c179505c3e57ab74febaa54f4092338] <==
	{"level":"warn","ts":"2025-11-24T03:38:56.600628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:38:56.619503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:38:56.634815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:38:56.641602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:38:56.744937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:39:38.410759Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.876434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-r62dg\" limit:1 ","response":"range_response_count:1 size:5630"}
	{"level":"info","ts":"2025-11-24T03:39:38.410922Z","caller":"traceutil/trace.go:172","msg":"trace[1226183934] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-r62dg; range_end:; response_count:1; response_revision:438; }","duration":"103.051215ms","start":"2025-11-24T03:39:38.307856Z","end":"2025-11-24T03:39:38.410907Z","steps":["trace[1226183934] 'range keys from in-memory index tree'  (duration: 102.709994ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:40:04.352787Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T03:40:04.353611Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-338254","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.187:2380"],"advertise-client-urls":["https://192.168.39.187:2379"]}
	{"level":"error","ts":"2025-11-24T03:40:04.357837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T03:40:04.361663Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T03:40:04.436570Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T03:40:04.436672Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f91ecb07db121930","current-leader-member-id":"f91ecb07db121930"}
	{"level":"info","ts":"2025-11-24T03:40:04.436809Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T03:40:04.436853Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-24T03:40:04.437042Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T03:40:04.437118Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T03:40:04.437127Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T03:40:04.437175Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.187:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T03:40:04.437186Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.187:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T03:40:04.437194Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.187:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T03:40:04.440402Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.187:2380"}
	{"level":"error","ts":"2025-11-24T03:40:04.440493Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.187:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T03:40:04.440518Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.187:2380"}
	{"level":"info","ts":"2025-11-24T03:40:04.440524Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-338254","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.187:2380"],"advertise-client-urls":["https://192.168.39.187:2379"]}
	
	
	==> etcd [ee9d4eaa90fb0345e57e83fc3f129e30b35b49269aefe44835a05b0d34bcb06a] <==
	{"level":"warn","ts":"2025-11-24T03:40:33.218542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.228678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.238623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.251496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.254586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.264552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.278610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.296962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.310860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.319339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.334946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.359691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.369682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.383650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.393750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.403571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.423627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.439809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.468610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.478128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.504692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.528513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.549775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.551360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:33.600856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36764","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:40:51 up 2 min,  0 users,  load average: 0.85, 0.42, 0.16
	Linux pause-338254 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Nov 24 01:33:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1745a9dfdd711a8ee834728c86110dd2e50c0839b6a2ae5b7741ca646e4fa2cf] <==
	I1124 03:40:17.655887       1 controller.go:176] quota evaluator worker shutdown
	I1124 03:40:17.656139       1 controller.go:176] quota evaluator worker shutdown
	I1124 03:40:17.656355       1 controller.go:176] quota evaluator worker shutdown
	I1124 03:40:17.656681       1 controller.go:176] quota evaluator worker shutdown
	I1124 03:40:17.656711       1 controller.go:176] quota evaluator worker shutdown
	E1124 03:40:18.267573       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1124 03:40:18.268131       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1124 03:40:19.266697       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1124 03:40:19.266999       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1124 03:40:20.266304       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1124 03:40:20.267017       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1124 03:40:21.266298       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1124 03:40:21.266967       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1124 03:40:22.267188       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1124 03:40:22.267331       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1124 03:40:23.266694       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1124 03:40:23.266869       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E1124 03:40:24.267388       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1124 03:40:24.267938       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1124 03:40:25.266382       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1124 03:40:25.267130       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1124 03:40:26.266814       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1124 03:40:26.267005       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1124 03:40:27.266026       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1124 03:40:27.267627       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [592bfddef7f3843239a8ccbca41e75d742b5b47035eb720c22b687aadcfc3991] <==
	I1124 03:40:34.433540       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 03:40:34.434708       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 03:40:34.441785       1 aggregator.go:171] initial CRD sync complete...
	I1124 03:40:34.441858       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 03:40:34.441889       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:40:34.441916       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:40:34.482776       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 03:40:34.484515       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 03:40:34.484550       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 03:40:34.484731       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 03:40:34.485063       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 03:40:34.485140       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:40:34.485206       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:40:34.498914       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:40:34.516530       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 03:40:34.516656       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 03:40:34.518257       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:40:35.361534       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:40:35.372504       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:40:36.022719       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:40:36.075189       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:40:36.112366       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:40:36.123752       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:40:37.967875       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:40:38.119908       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3107ea0c8b070f17d3c3521a2168bdf0343a9d16631ca685fce37c759507201e] <==
	
	
	==> kube-controller-manager [828c8ce5d0da8c3d47fda88aab05483405c3812f380b667acbdab5894d7cd977] <==
	I1124 03:40:37.839492       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 03:40:37.839588       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-338254"
	I1124 03:40:37.839663       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 03:40:37.846538       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:40:37.846641       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:40:37.846684       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:40:37.846712       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:40:37.846722       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:40:37.848133       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:40:37.852285       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:40:37.852560       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:37.854819       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:40:37.857133       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:40:37.860528       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 03:40:37.860600       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:40:37.863488       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:40:37.863689       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:40:37.864006       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:37.864017       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:40:37.864024       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:40:37.864311       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:40:37.864589       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:40:37.865467       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:40:37.867501       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:40:37.870594       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [6aa3ded1ef102324c911cc3a9284ffb02ad584ceaaa53d6767459fedd68b5ab0] <==
	I1124 03:40:17.675022       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:40:17.675358       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:40:17.675386       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:40:17.684501       1 config.go:200] "Starting service config controller"
	I1124 03:40:17.684516       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:40:17.685830       1 config.go:309] "Starting node config controller"
	I1124 03:40:17.685844       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:40:17.685850       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1124 03:40:17.686326       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.187:8443: connect: connection refused"
	E1124 03:40:17.686484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1124 03:40:17.686844       1 config.go:106] "Starting endpoint slice config controller"
	E1124 03:40:17.686876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	I1124 03:40:17.686984       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:40:17.687021       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:40:17.687036       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	E1124 03:40:17.687594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1124 03:40:18.554082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1124 03:40:19.134578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1124 03:40:19.261461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:40:20.614745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1124 03:40:20.829021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1124 03:40:22.176517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:40:25.094640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1124 03:40:25.503659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1124 03:40:26.879317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.187:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kube-proxy [d26b92c84c000e397cb5cd553cb9985c10191093636cf348385f3d88402b5d68] <==
	I1124 03:40:35.758715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:40:35.859282       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:40:35.859334       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.187"]
	E1124 03:40:35.859455       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:40:35.910525       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 03:40:35.910610       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 03:40:35.910640       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:40:35.928644       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:40:35.928882       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:40:35.928894       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:40:35.936985       1 config.go:200] "Starting service config controller"
	I1124 03:40:35.937063       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:40:35.937111       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:40:35.937157       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:40:35.937195       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:40:35.937239       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:40:35.937959       1 config.go:309] "Starting node config controller"
	I1124 03:40:35.938027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:40:35.938058       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:40:36.038058       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:40:36.038113       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:40:36.038121       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6e77d71f8cd87d5914a71b4dfde1812609a19db003223ac28acf6c3fb87f3cce] <==
	I1124 03:40:15.326629       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:40:17.323801       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:40:17.323921       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:40:17.323949       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:40:17.324010       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:40:17.444997       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:40:17.445048       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1124 03:40:17.445134       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1124 03:40:17.460515       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:17.460633       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:17.463137       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1124 03:40:17.463212       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1124 03:40:17.463360       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:17.463387       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:17.463950       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:40:17.464023       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 03:40:17.464143       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 03:40:17.464554       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 03:40:17.464582       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 03:40:17.464620       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [82be8eb1cfd0def4f547e4bc20695197afe026027106fd1af681930a72f62b20] <==
	I1124 03:40:32.171196       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:40:34.401032       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:40:34.401124       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:40:34.401147       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:40:34.401164       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:40:34.455862       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:40:34.458355       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:40:34.470970       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:40:34.471128       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:34.471211       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:34.471286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:40:34.571335       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:40:32 pause-338254 kubelet[3855]: E1124 03:40:32.461860    3855 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-338254\" not found" node="pause-338254"
	Nov 24 03:40:33 pause-338254 kubelet[3855]: E1124 03:40:33.463911    3855 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-338254\" not found" node="pause-338254"
	Nov 24 03:40:33 pause-338254 kubelet[3855]: E1124 03:40:33.464260    3855 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-338254\" not found" node="pause-338254"
	Nov 24 03:40:33 pause-338254 kubelet[3855]: E1124 03:40:33.465511    3855 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-338254\" not found" node="pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: I1124 03:40:34.375724    3855 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: I1124 03:40:34.464045    3855 kubelet_node_status.go:124] "Node was previously registered" node="pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: I1124 03:40:34.464116    3855 kubelet_node_status.go:78] "Successfully registered node" node="pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: I1124 03:40:34.464138    3855 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: I1124 03:40:34.468179    3855 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: E1124 03:40:34.545500    3855 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-338254\" already exists" pod="kube-system/kube-controller-manager-pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: I1124 03:40:34.545665    3855 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: E1124 03:40:34.555350    3855 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-338254\" already exists" pod="kube-system/kube-scheduler-pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: I1124 03:40:34.555375    3855 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: E1124 03:40:34.565148    3855 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-338254\" already exists" pod="kube-system/etcd-pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: I1124 03:40:34.565303    3855 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-338254"
	Nov 24 03:40:34 pause-338254 kubelet[3855]: E1124 03:40:34.573693    3855 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-338254\" already exists" pod="kube-system/kube-apiserver-pause-338254"
	Nov 24 03:40:35 pause-338254 kubelet[3855]: I1124 03:40:35.242892    3855 apiserver.go:52] "Watching apiserver"
	Nov 24 03:40:35 pause-338254 kubelet[3855]: I1124 03:40:35.275704    3855 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 03:40:35 pause-338254 kubelet[3855]: I1124 03:40:35.303949    3855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38935252-f0dc-41c5-a694-768a35ae643e-lib-modules\") pod \"kube-proxy-5ltvq\" (UID: \"38935252-f0dc-41c5-a694-768a35ae643e\") " pod="kube-system/kube-proxy-5ltvq"
	Nov 24 03:40:35 pause-338254 kubelet[3855]: I1124 03:40:35.304104    3855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38935252-f0dc-41c5-a694-768a35ae643e-xtables-lock\") pod \"kube-proxy-5ltvq\" (UID: \"38935252-f0dc-41c5-a694-768a35ae643e\") " pod="kube-system/kube-proxy-5ltvq"
	Nov 24 03:40:35 pause-338254 kubelet[3855]: I1124 03:40:35.548559    3855 scope.go:117] "RemoveContainer" containerID="6aa3ded1ef102324c911cc3a9284ffb02ad584ceaaa53d6767459fedd68b5ab0"
	Nov 24 03:40:40 pause-338254 kubelet[3855]: E1124 03:40:40.431516    3855 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763955640430795901  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 24 03:40:40 pause-338254 kubelet[3855]: E1124 03:40:40.431548    3855 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763955640430795901  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 24 03:40:50 pause-338254 kubelet[3855]: E1124 03:40:50.436792    3855 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763955650435229466  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 24 03:40:50 pause-338254 kubelet[3855]: E1124 03:40:50.437375    3855 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763955650435229466  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-338254 -n pause-338254
helpers_test.go:269: (dbg) Run:  kubectl --context pause-338254 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (67.83s)

                                                
                                    

Test pass (302/345)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 24.86
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 12.63
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.66
22 TestOffline 99.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 128.37
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 11.52
35 TestAddons/parallel/Registry 18.92
36 TestAddons/parallel/RegistryCreds 0.96
38 TestAddons/parallel/InspektorGadget 12.27
39 TestAddons/parallel/MetricsServer 6.28
41 TestAddons/parallel/CSI 61.62
42 TestAddons/parallel/Headlamp 20.23
43 TestAddons/parallel/CloudSpanner 7.04
44 TestAddons/parallel/LocalPath 13.37
45 TestAddons/parallel/NvidiaDevicePlugin 6.7
46 TestAddons/parallel/Yakd 12.01
48 TestAddons/StoppedEnableDisable 84.9
49 TestCertOptions 65.02
50 TestCertExpiration 272.58
52 TestForceSystemdFlag 72.32
53 TestForceSystemdEnv 59.51
58 TestErrorSpam/setup 35.69
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.7
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.8
63 TestErrorSpam/stop 79.91
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 75.64
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 37.1
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
75 TestFunctional/serial/CacheCmd/cache/add_local 2.3
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 41.04
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.3
86 TestFunctional/serial/LogsFileCmd 1.3
87 TestFunctional/serial/InvalidService 4.27
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 29.59
91 TestFunctional/parallel/DryRun 0.26
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 0.68
97 TestFunctional/parallel/ServiceCmdConnect 10.58
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 46.96
101 TestFunctional/parallel/SSHCmd 0.39
102 TestFunctional/parallel/CpCmd 1.22
103 TestFunctional/parallel/MySQL 24.91
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.29
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
113 TestFunctional/parallel/License 0.49
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
125 TestFunctional/parallel/ProfileCmd/profile_list 0.31
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
127 TestFunctional/parallel/MountCmd/any-port 8.28
128 TestFunctional/parallel/ServiceCmd/List 0.25
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.23
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
131 TestFunctional/parallel/ServiceCmd/Format 0.37
132 TestFunctional/parallel/MountCmd/specific-port 1.57
133 TestFunctional/parallel/ServiceCmd/URL 0.35
134 TestFunctional/parallel/Version/short 0.08
135 TestFunctional/parallel/Version/components 0.42
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
140 TestFunctional/parallel/ImageCommands/ImageBuild 6.87
141 TestFunctional/parallel/ImageCommands/Setup 1.95
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.97
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.74
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.65
150 TestFunctional/parallel/ImageCommands/ImageRemove 2.39
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.26
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.62
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 196.61
161 TestMultiControlPlane/serial/DeployApp 7.53
162 TestMultiControlPlane/serial/PingHostFromPods 1.42
163 TestMultiControlPlane/serial/AddWorkerNode 42.72
164 TestMultiControlPlane/serial/NodeLabels 0.08
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.67
166 TestMultiControlPlane/serial/CopyFile 11.14
167 TestMultiControlPlane/serial/StopSecondaryNode 71.52
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.53
169 TestMultiControlPlane/serial/RestartSecondaryNode 37.65
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.76
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 366.94
172 TestMultiControlPlane/serial/DeleteSecondaryNode 17.96
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
174 TestMultiControlPlane/serial/StopCluster 243.94
175 TestMultiControlPlane/serial/RestartCluster 95.76
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
177 TestMultiControlPlane/serial/AddSecondaryNode 101.14
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
183 TestJSONOutput/start/Command 79.6
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.74
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.1
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.25
211 TestMainNoArgs 0.07
212 TestMinikubeProfile 79.19
215 TestMountStart/serial/StartWithMountFirst 21.22
216 TestMountStart/serial/VerifyMountFirst 0.32
217 TestMountStart/serial/StartWithMountSecond 20.36
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.31
221 TestMountStart/serial/Stop 1.36
222 TestMountStart/serial/RestartStopped 18.78
223 TestMountStart/serial/VerifyMountPostStop 0.31
226 TestMultiNode/serial/FreshStart2Nodes 99.3
227 TestMultiNode/serial/DeployApp2Nodes 6.47
228 TestMultiNode/serial/PingHostFrom2Pods 0.91
229 TestMultiNode/serial/AddNode 44.59
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.47
232 TestMultiNode/serial/CopyFile 6.24
233 TestMultiNode/serial/StopNode 2.3
234 TestMultiNode/serial/StartAfterStop 41.3
235 TestMultiNode/serial/RestartKeepsNodes 290.85
236 TestMultiNode/serial/DeleteNode 2.62
237 TestMultiNode/serial/StopMultiNode 156.6
238 TestMultiNode/serial/RestartMultiNode 83.22
239 TestMultiNode/serial/ValidateNameConflict 40.42
246 TestScheduledStopUnix 107.19
250 TestRunningBinaryUpgrade 119.4
252 TestKubernetesUpgrade 83.5
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestStartStop/group/old-k8s-version/serial/FirstStart 93.17
265 TestNoKubernetes/serial/StartWithK8s 77.07
266 TestNoKubernetes/serial/StartWithStopK8s 6.57
267 TestNoKubernetes/serial/Start 20.07
268 TestStartStop/group/old-k8s-version/serial/DeployApp 11.56
276 TestNetworkPlugins/group/false 4.27
277 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
279 TestNoKubernetes/serial/ProfileList 2.14
280 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.32
281 TestStartStop/group/old-k8s-version/serial/Stop 83.45
282 TestNoKubernetes/serial/Stop 1.51
283 TestNoKubernetes/serial/StartNoArgs 17.88
287 TestISOImage/Setup 30.04
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
290 TestISOImage/Binaries/crictl 0.16
291 TestISOImage/Binaries/curl 0.17
292 TestISOImage/Binaries/docker 0.17
293 TestISOImage/Binaries/git 0.16
294 TestISOImage/Binaries/iptables 0.16
295 TestISOImage/Binaries/podman 0.16
296 TestISOImage/Binaries/rsync 0.17
297 TestISOImage/Binaries/socat 0.17
298 TestISOImage/Binaries/wget 0.16
299 TestISOImage/Binaries/VBoxControl 0.17
300 TestISOImage/Binaries/VBoxService 0.26
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.72
302 TestStartStop/group/old-k8s-version/serial/SecondStart 72.94
304 TestStartStop/group/no-preload/serial/FirstStart 132.09
306 TestStartStop/group/embed-certs/serial/FirstStart 95.43
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
310 TestStartStop/group/old-k8s-version/serial/Pause 2.98
311 TestStartStop/group/no-preload/serial/DeployApp 10.32
312 TestStartStop/group/embed-certs/serial/DeployApp 11.28
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
314 TestStartStop/group/no-preload/serial/Stop 85.01
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
316 TestStartStop/group/embed-certs/serial/Stop 83.53
318 TestPause/serial/Start 77.98
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.42
321 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
322 TestStartStop/group/no-preload/serial/SecondStart 67.23
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/embed-certs/serial/SecondStart 62.85
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.37
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 88.41
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
333 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
334 TestStartStop/group/no-preload/serial/Pause 2.53
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
336 TestStartStop/group/embed-certs/serial/Pause 2.86
338 TestStartStop/group/newest-cni/serial/FirstStart 45.63
339 TestStoppedBinaryUpgrade/Setup 3.04
340 TestNetworkPlugins/group/auto/Start 97.59
341 TestStoppedBinaryUpgrade/Upgrade 119.94
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.38
344 TestStartStop/group/newest-cni/serial/Stop 11.57
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
346 TestStartStop/group/newest-cni/serial/SecondStart 34.17
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
348 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 59.65
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
352 TestStartStop/group/newest-cni/serial/Pause 3.99
353 TestNetworkPlugins/group/kindnet/Start 99.3
354 TestNetworkPlugins/group/auto/KubeletFlags 0.21
355 TestNetworkPlugins/group/auto/NetCatPod 11.29
356 TestNetworkPlugins/group/auto/DNS 0.17
357 TestNetworkPlugins/group/auto/Localhost 0.16
358 TestNetworkPlugins/group/auto/HairPin 0.14
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 16.01
360 TestStoppedBinaryUpgrade/MinikubeLogs 1.35
361 TestNetworkPlugins/group/calico/Start 72.89
362 TestNetworkPlugins/group/custom-flannel/Start 90.47
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.61
366 TestNetworkPlugins/group/enable-default-cni/Start 113.18
367 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
370 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
371 TestNetworkPlugins/group/calico/KubeletFlags 0.19
372 TestNetworkPlugins/group/calico/NetCatPod 12.25
373 TestNetworkPlugins/group/kindnet/DNS 0.14
374 TestNetworkPlugins/group/kindnet/Localhost 0.12
375 TestNetworkPlugins/group/kindnet/HairPin 0.15
376 TestNetworkPlugins/group/calico/DNS 0.19
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.19
378 TestNetworkPlugins/group/calico/Localhost 0.15
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
380 TestNetworkPlugins/group/calico/HairPin 0.2
381 TestNetworkPlugins/group/flannel/Start 68.11
382 TestNetworkPlugins/group/custom-flannel/DNS 0.17
383 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
384 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
385 TestNetworkPlugins/group/bridge/Start 94.59
387 TestISOImage/PersistentMounts//data 0.2
388 TestISOImage/PersistentMounts//var/lib/docker 0.17
389 TestISOImage/PersistentMounts//var/lib/cni 0.16
390 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
391 TestISOImage/PersistentMounts//var/lib/minikube 0.17
392 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
393 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
394 TestISOImage/VersionJSON 0.19
395 TestISOImage/eBPFSupport 0.18
396 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
397 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.25
398 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
399 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
400 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
401 TestNetworkPlugins/group/flannel/ControllerPod 6.01
402 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
403 TestNetworkPlugins/group/flannel/NetCatPod 10.25
404 TestNetworkPlugins/group/flannel/DNS 0.15
405 TestNetworkPlugins/group/flannel/Localhost 0.13
406 TestNetworkPlugins/group/flannel/HairPin 0.12
407 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
408 TestNetworkPlugins/group/bridge/NetCatPod 10.26
409 TestNetworkPlugins/group/bridge/DNS 0.14
410 TestNetworkPlugins/group/bridge/Localhost 0.13
411 TestNetworkPlugins/group/bridge/HairPin 0.11
x
+
TestDownloadOnly/v1.28.0/json-events (24.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-296965 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-296965 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.862170902s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (24.86s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 02:38:22.053008  189749 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1124 02:38:22.053126  189749 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
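
The preload-exists check above only asserts that the cached tarball reported by preload.go is present on disk. A minimal sketch of that presence check follows, assuming nothing beyond the cache layout visible in the log; the helper name preloadPath and the hard-coded v18/cri-o/amd64 file-name pieces are taken from the logged path, not from minikube's code.

// preload_check.go - minimal sketch of a "preload exists" check (assumptions noted above).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache location reported in the log for a given Kubernetes version.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", p)
		os.Exit(1)
	}
	fmt.Println("preload found:", p)
}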

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-296965
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-296965: exit status 85 (76.478076ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-296965 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-296965 │ jenkins │ v1.37.0 │ 24 Nov 25 02:37 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:37:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:37:57.244506  189761 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:37:57.244761  189761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:37:57.244771  189761 out.go:374] Setting ErrFile to fd 2...
	I1124 02:37:57.244775  189761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:37:57.244944  189761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	W1124 02:37:57.245065  189761 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21975-185833/.minikube/config/config.json: open /home/jenkins/minikube-integration/21975-185833/.minikube/config/config.json: no such file or directory
	I1124 02:37:57.245551  189761 out.go:368] Setting JSON to true
	I1124 02:37:57.246427  189761 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8417,"bootTime":1763943460,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:37:57.246482  189761 start.go:143] virtualization: kvm guest
	I1124 02:37:57.250983  189761 out.go:99] [download-only-296965] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:37:57.251105  189761 notify.go:221] Checking for updates...
	W1124 02:37:57.251139  189761 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 02:37:57.252306  189761 out.go:171] MINIKUBE_LOCATION=21975
	I1124 02:37:57.253548  189761 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:37:57.254802  189761 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 02:37:57.255983  189761 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 02:37:57.259868  189761 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 02:37:57.261913  189761 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 02:37:57.262115  189761 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:37:57.296970  189761 out.go:99] Using the kvm2 driver based on user configuration
	I1124 02:37:57.297005  189761 start.go:309] selected driver: kvm2
	I1124 02:37:57.297018  189761 start.go:927] validating driver "kvm2" against <nil>
	I1124 02:37:57.297343  189761 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:37:57.297855  189761 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1124 02:37:57.298016  189761 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 02:37:57.298049  189761 cni.go:84] Creating CNI manager for ""
	I1124 02:37:57.298112  189761 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 02:37:57.298129  189761 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 02:37:57.298190  189761 start.go:353] cluster config:
	{Name:download-only-296965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-296965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:37:57.298424  189761 iso.go:125] acquiring lock: {Name:mk63ee8f30093c8c7d0696dd2486a8eb0d8bd024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 02:37:57.299787  189761 out.go:99] Downloading VM boot image ...
	I1124 02:37:57.299819  189761 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21975-185833/.minikube/cache/iso/amd64/minikube-v1.37.0-1763935228-21975-amd64.iso
	I1124 02:38:08.707478  189761 out.go:99] Starting "download-only-296965" primary control-plane node in "download-only-296965" cluster
	I1124 02:38:08.707532  189761 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 02:38:08.813253  189761 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1124 02:38:08.813284  189761 cache.go:65] Caching tarball of preloaded images
	I1124 02:38:08.813475  189761 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 02:38:08.815042  189761 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 02:38:08.815058  189761 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1124 02:38:08.921025  189761 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1124 02:38:08.921161  189761 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-296965 host does not exist
	  To start a cluster, run: "minikube start -p download-only-296965"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
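
LogsDuration passes precisely because "minikube logs" exits with status 85 on a download-only profile, as shown above. A hedged sketch of asserting a specific CLI exit code with os/exec; the binary path and profile name are copied from the log, and no minikube test helper is reused.

// exitcode_check.go - sketch of asserting an expected non-zero exit code from a CLI run.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-296965")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("got expected exit status 85")
		return
	}
	fmt.Printf("unexpected result: err=%v\noutput:\n%s\n", err, out)
	os.Exit(1)
}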

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-296965
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (12.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-824012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-824012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.63284806s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.63s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 02:38:35.075078  189749 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1124 02:38:35.075121  189749 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-824012
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-824012: exit status 85 (75.780481ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-296965 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-296965 │ jenkins │ v1.37.0 │ 24 Nov 25 02:37 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ delete  │ -p download-only-296965                                                                                                                                                 │ download-only-296965 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ start   │ -o=json --download-only -p download-only-824012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-824012 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:38:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:38:22.497638  190005 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:38:22.497741  190005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:38:22.497751  190005 out.go:374] Setting ErrFile to fd 2...
	I1124 02:38:22.497755  190005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:38:22.497973  190005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 02:38:22.498445  190005 out.go:368] Setting JSON to true
	I1124 02:38:22.499268  190005 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8442,"bootTime":1763943460,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:38:22.499322  190005 start.go:143] virtualization: kvm guest
	I1124 02:38:22.501102  190005 out.go:99] [download-only-824012] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:38:22.501232  190005 notify.go:221] Checking for updates...
	I1124 02:38:22.502474  190005 out.go:171] MINIKUBE_LOCATION=21975
	I1124 02:38:22.503764  190005 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:38:22.504905  190005 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 02:38:22.505885  190005 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 02:38:22.506935  190005 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 02:38:22.508815  190005 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 02:38:22.509072  190005 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:38:22.540072  190005 out.go:99] Using the kvm2 driver based on user configuration
	I1124 02:38:22.540107  190005 start.go:309] selected driver: kvm2
	I1124 02:38:22.540116  190005 start.go:927] validating driver "kvm2" against <nil>
	I1124 02:38:22.540458  190005 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:38:22.540943  190005 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1124 02:38:22.541090  190005 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 02:38:22.541123  190005 cni.go:84] Creating CNI manager for ""
	I1124 02:38:22.541174  190005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 02:38:22.541180  190005 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 02:38:22.541229  190005 start.go:353] cluster config:
	{Name:download-only-824012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-824012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:38:22.541322  190005 iso.go:125] acquiring lock: {Name:mk63ee8f30093c8c7d0696dd2486a8eb0d8bd024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 02:38:22.542781  190005 out.go:99] Starting "download-only-824012" primary control-plane node in "download-only-824012" cluster
	I1124 02:38:22.542819  190005 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:38:22.665081  190005 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 02:38:22.665141  190005 cache.go:65] Caching tarball of preloaded images
	I1124 02:38:22.665349  190005 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:38:22.666900  190005 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1124 02:38:22.666927  190005 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1124 02:38:22.776984  190005 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1124 02:38:22.777047  190005 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21975-185833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-824012 host does not exist
	  To start a cluster, run: "minikube start -p download-only-824012"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
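
The download lines above fetch the preload with a ?checksum=md5:<sum> query and log the sum obtained from the GCS API. A small sketch of verifying such an md5 after download; the file path and the expected value are copied from this log and are otherwise assumptions.

// md5_verify.go - sketch of post-download md5 verification for the cached preload tarball.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func md5sum(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const expected = "d1a46823b9241c5d38b5e0866197f2a8" // value logged by the GCS checksum lookup above
	path := os.ExpandEnv("$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
	sum, err := md5sum(path)
	if err != nil {
		fmt.Println("read failed:", err)
		os.Exit(1)
	}
	if sum != expected {
		fmt.Println("checksum mismatch:", sum)
		os.Exit(1)
	}
	fmt.Println("checksum ok")
}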

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-824012
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I1124 02:38:35.741131  189749 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-467261 --alsologtostderr --binary-mirror http://127.0.0.1:44003 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-467261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-467261
--- PASS: TestBinaryMirror (0.66s)
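
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:44003 in this run). A minimal sketch of such a mirror as a plain file server; the ./mirror directory is hypothetical, and the exact release-path layout minikube requests from the mirror is not shown here.

// binary_mirror.go - sketch of a local HTTP mirror serving a directory of binaries.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve the (hypothetical) ./mirror directory on the port used in the test log.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("serving binary mirror on 127.0.0.1:44003")
	log.Fatal(http.ListenAndServe("127.0.0.1:44003", nil))
}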

                                                
                                    
x
+
TestOffline (99.81s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-340575 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-340575 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.901969294s)
helpers_test.go:175: Cleaning up "offline-crio-340575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-340575
--- PASS: TestOffline (99.81s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-775116
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-775116: exit status 85 (67.383063ms)

                                                
                                                
-- stdout --
	* Profile "addons-775116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-775116"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-775116
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-775116: exit status 85 (66.873083ms)

                                                
                                                
-- stdout --
	* Profile "addons-775116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-775116"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (128.37s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-775116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-775116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.367615734s)
--- PASS: TestAddons/Setup (128.37s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-775116 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-775116 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-775116 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-775116 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cf30ca55-b9ac-463f-85eb-2b8d09b207a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cf30ca55-b9ac-463f-85eb-2b8d09b207a3] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003998085s
addons_test.go:694: (dbg) Run:  kubectl --context addons-775116 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-775116 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-775116 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.52s)
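
The FakeCredentials check execs printenv in the busybox pod to confirm the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS. A sketch of the same probe via kubectl, using the context and pod names from the log; only a non-empty value is asserted.

// gcpauth_env_check.go - sketch of verifying env injection inside a running pod via kubectl exec.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-775116",
		"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		fmt.Println("GOOGLE_APPLICATION_CREDENTIALS not injected:", err)
		os.Exit(1)
	}
	fmt.Println("credentials path injected:", strings.TrimSpace(string(out)))
}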

                                                
                                    
x
+
TestAddons/parallel/Registry (18.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.675414ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-r9pj2" [2b90c53d-da92-4cbd-b0c1-9bdc2175baac] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002256133s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-tgmzc" [820cb950-bbf5-4368-a91e-279938b4d42c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003623338s
addons_test.go:392: (dbg) Run:  kubectl --context addons-775116 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-775116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-775116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.542403778s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 ip
2025/11/24 02:41:22 [DEBUG] GET http://192.168.39.95:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775116 addons disable registry --alsologtostderr -v=1: (1.154980492s)
--- PASS: TestAddons/parallel/Registry (18.92s)
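
The Registry test probes the addon twice: from inside the cluster with wget --spider against registry.kube-system.svc.cluster.local, and from the host against the node IP on port 5000 (the DEBUG GET above). A sketch of the host-side probe only; the address is the one logged for this run.

// registry_probe.go - sketch of an HTTP reachability probe against the registry NodePort endpoint.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.95:5000/")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with status:", resp.Status)
}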

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.96s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 15.772517ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-775116
addons_test.go:332: (dbg) Run:  kubectl --context addons-775116 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.96s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-pw7cd" [0f332b27-dde5-454c-be62-35092f748b50] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00465641s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775116 addons disable inspektor-gadget --alsologtostderr -v=1: (6.267782719s)
--- PASS: TestAddons/parallel/InspektorGadget (12.27s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.660613ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-bh5pm" [37ba89e4-4050-4ee0-94e4-767ac24d4f1c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.209725115s
addons_test.go:463: (dbg) Run:  kubectl --context addons-775116 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.28s)

                                                
                                    
x
+
TestAddons/parallel/CSI (61.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1124 02:41:18.502068  189749 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 02:41:18.506288  189749 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 02:41:18.506314  189749 kapi.go:107] duration metric: took 4.278042ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.288101ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-775116 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-775116 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a05bd772-33f0-4b2b-aaaf-fe74cf2a6cad] Pending
helpers_test.go:352: "task-pv-pod" [a05bd772-33f0-4b2b-aaaf-fe74cf2a6cad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a05bd772-33f0-4b2b-aaaf-fe74cf2a6cad] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004594634s
addons_test.go:572: (dbg) Run:  kubectl --context addons-775116 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-775116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-775116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-775116 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-775116 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-775116 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-775116 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8c7f5130-33bc-460d-87f7-0c7525f38f97] Pending
helpers_test.go:352: "task-pv-pod-restore" [8c7f5130-33bc-460d-87f7-0c7525f38f97] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8c7f5130-33bc-460d-87f7-0c7525f38f97] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005947134s
addons_test.go:614: (dbg) Run:  kubectl --context addons-775116 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-775116 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-775116 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775116 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.078033061s)
--- PASS: TestAddons/parallel/CSI (61.62s)
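
Most of the CSI test's wall time is the repeated jsonpath polling of the PVC phase shown above. A sketch of that wait loop with kubectl, using the context and PVC names from the log; the 5s interval and 6m deadline mirror the waiting cadence of the helpers but are assumptions here.

// pvc_wait.go - sketch of polling a PVC's .status.phase until it reports Bound or a deadline passes.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-775116",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for pvc to become Bound")
	os.Exit(1)
}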

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-775116 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-775116 --alsologtostderr -v=1: (1.144856201s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-ghnxc" [a5d00a6d-5f16-49dd-9f99-cf16cd91f145] Pending
helpers_test.go:352: "headlamp-dfcdc64b-ghnxc" [a5d00a6d-5f16-49dd-9f99-cf16cd91f145] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-ghnxc" [a5d00a6d-5f16-49dd-9f99-cf16cd91f145] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.007320854s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775116 addons disable headlamp --alsologtostderr -v=1: (6.07311147s)
--- PASS: TestAddons/parallel/Headlamp (20.23s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (7.04s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-mfjtn" [5d0e7f2c-5e7a-4d65-aa59-7ecdcdb63c84] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005523863s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775116 addons disable cloud-spanner --alsologtostderr -v=1: (1.011519735s)
--- PASS: TestAddons/parallel/CloudSpanner (7.04s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (13.37s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-775116 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-775116 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775116 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d6fd80a5-18be-4fc7-9b9e-2108e1e7784a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d6fd80a5-18be-4fc7-9b9e-2108e1e7784a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d6fd80a5-18be-4fc7-9b9e-2108e1e7784a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005620646s
addons_test.go:967: (dbg) Run:  kubectl --context addons-775116 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 ssh "cat /opt/local-path-provisioner/pvc-ad5e62ee-345d-4806-badd-0fe8f1bfff03_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-775116 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-775116 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.37s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.7s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5pz67" [0859f4f5-557a-4bbe-a610-e1d991b4d68d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004523491s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.70s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xpf97" [e2064bd6-ec2f-4b36-91be-872af9f387f2] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005624496s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-775116 addons disable yakd --alsologtostderr -v=1: (5.99834514s)
--- PASS: TestAddons/parallel/Yakd (12.01s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (84.9s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-775116
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-775116: (1m24.681485376s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-775116
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-775116
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-775116
--- PASS: TestAddons/StoppedEnableDisable (84.90s)

                                                
                                    
x
+
TestCertOptions (65.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-103698 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1124 03:35:19.098953  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:35:28.573070  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-103698 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m3.434553229s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-103698 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-103698 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-103698 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-103698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-103698
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-103698: (1.130835999s)
--- PASS: TestCertOptions (65.02s)
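
The "openssl x509 -text -noout" step above is how cert_options_test.go confirms that the extra SANs and the non-default apiserver port requested on the command line actually ended up in the generated apiserver certificate. A minimal standalone sketch of the same SAN check in Go (not part of the test suite; the local file name apiserver.crt and the expected SAN are assumptions taken from this run's --apiserver-names flag):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// Reads a PEM-encoded certificate (e.g. a copy of apiserver.crt pulled out of
// the node) and verifies that a SAN requested via --apiserver-names is present.
func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the cert
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)

	wantDNS := "www.google.com" // from --apiserver-names in the run above
	found := false
	for _, name := range cert.DNSNames {
		if name == wantDNS {
			found = true
		}
	}
	if !found {
		log.Fatalf("expected SAN %q not present", wantDNS)
	}
	fmt.Println("custom SAN present")
}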

                                                
                                    
x
+
TestCertExpiration (272.58s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-734487 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-734487 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m14.207987359s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-734487 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E1124 03:38:58.048083  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:39:03.169537  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-734487 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (17.580614159s)
helpers_test.go:175: Cleaning up "cert-expiration-734487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-734487
E1124 03:39:13.411824  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestCertExpiration (272.58s)
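
The two starts above are the whole point of TestCertExpiration: bring the cluster up with certificates that expire in three minutes, let that window run down, then restart with --cert-expiration=8760h and expect minikube to rotate the certificates instead of failing. A hedged sketch of driving the same two-phase flow from Go via os/exec (assumes minikube is on PATH; the profile name is made up):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run shells out to minikube, streams its output, and fails fast on error.
func run(args ...string) {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
}

func main() {
	profile := "cert-expiration-demo" // hypothetical profile name

	// Phase 1: start with deliberately short-lived certificates.
	run("start", "-p", profile, "--memory=3072", "--cert-expiration=3m",
		"--driver=kvm2", "--container-runtime=crio")

	// (The real test waits out most of the 3-minute window here before restarting.)

	// Phase 2: restart with a sane expiration; minikube should regenerate the
	// certificates rather than refuse to start with expired ones.
	run("start", "-p", profile, "--memory=3072", "--cert-expiration=8760h",
		"--driver=kvm2", "--container-runtime=crio")

	run("delete", "-p", profile)
}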

                                                
                                    
x
+
TestForceSystemdFlag (72.32s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-522492 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-522492 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.626251141s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-522492 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-522492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-522492
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-522492: (1.505120972s)
--- PASS: TestForceSystemdFlag (72.32s)
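
The ssh "cat /etc/crio/crio.conf.d/02-crio.conf" step above exists so the test can confirm that --force-systemd propagated into CRI-O's cgroup manager setting. A minimal sketch of the same assertion (the exact key it looks for is an assumption on my part, based on CRI-O's standard config format; the profile name is hypothetical):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "force-systemd-flag-demo" // hypothetical profile name

	// Read CRI-O's drop-in config from inside the node over minikube ssh.
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}

	// With --force-systemd we expect CRI-O to be configured for the systemd
	// cgroup driver; without the flag the default driver may appear instead.
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		log.Fatal("systemd cgroup manager not found in 02-crio.conf")
	}
}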

                                                
                                    
x
+
TestForceSystemdEnv (59.51s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-636403 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-636403 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.094469993s)
helpers_test.go:175: Cleaning up "force-systemd-env-636403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-636403
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-636403: (1.414648257s)
--- PASS: TestForceSystemdEnv (59.51s)

                                                
                                    
x
+
TestErrorSpam/setup (35.69s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-658793 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-658793 --driver=kvm2  --container-runtime=crio
E1124 02:45:45.498591  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:45.505013  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:45.516507  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:45.537941  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:45.579354  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:45.660807  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:45.822451  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:46.144194  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:46.786241  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:48.068578  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:50.630063  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:45:55.751984  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-658793 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-658793 --driver=kvm2  --container-runtime=crio: (35.689723264s)
--- PASS: TestErrorSpam/setup (35.69s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 status
E1124 02:46:05.993934  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
x
+
TestErrorSpam/stop (79.91s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 stop
E1124 02:46:26.475729  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:47:07.438830  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 stop: (1m16.87482686s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 stop: (1.422421545s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-658793 --log_dir /tmp/nospam-658793 stop: (1.615637416s)
--- PASS: TestErrorSpam/stop (79.91s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21975-185833/.minikube/files/etc/test/nested/copy/189749/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (75.64s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-803727 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1124 02:48:29.363592  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-803727 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m15.636842056s)
--- PASS: TestFunctional/serial/StartWithProxy (75.64s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (37.1s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1124 02:48:45.887068  189749 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-803727 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-803727 --alsologtostderr -v=8: (37.096933947s)
functional_test.go:678: soft start took 37.097759779s for "functional-803727" cluster.
I1124 02:49:22.984489  189749 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (37.10s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-803727 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 cache add registry.k8s.io/pause:3.1: (1.121227946s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 cache add registry.k8s.io/pause:3.3: (1.027910051s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 cache add registry.k8s.io/pause:latest: (1.059120637s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-803727 /tmp/TestFunctionalserialCacheCmdcacheadd_local2407579930/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 cache add minikube-local-cache-test:functional-803727
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 cache add minikube-local-cache-test:functional-803727: (1.93263721s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 cache delete minikube-local-cache-test:functional-803727
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-803727
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (180.338803ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
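
The cache_reload steps above amount to a round trip: remove the cached image from inside the node, prove crictl no longer sees it (the "no such image" failure captured above), run "minikube cache reload", then prove it is back. A hedged Go sketch that wraps the same CLI calls (profile name hypothetical):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs a minikube subcommand against the given profile.
func mk(profile string, args ...string) ([]byte, error) {
	return exec.Command("minikube", append([]string{"-p", profile}, args...)...).CombinedOutput()
}

func main() {
	profile := "functional-demo" // hypothetical profile name
	image := "registry.k8s.io/pause:latest"

	// 1. Delete the image from the node's container storage.
	if out, err := mk(profile, "ssh", "sudo crictl rmi "+image); err != nil {
		log.Fatalf("rmi: %v\n%s", err, out)
	}

	// 2. Inspecting it should now fail, mirroring the non-zero exit in the log.
	if _, err := mk(profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		log.Fatal("expected inspecti to fail after rmi")
	}

	// 3. Reload everything in minikube's local cache back into the node.
	if out, err := mk(profile, "cache", "reload"); err != nil {
		log.Fatalf("cache reload: %v\n%s", err, out)
	}

	// 4. The image should be present again.
	if out, err := mk(profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		log.Fatalf("image still missing after reload: %v\n%s", err, out)
	}
	fmt.Println("cache reload restored", image)
}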

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 kubectl -- --context functional-803727 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-803727 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (41.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-803727 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-803727 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.039538523s)
functional_test.go:776: restart took 41.039686511s for "functional-803727" cluster.
I1124 02:50:11.945668  189749 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.04s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-803727 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 logs: (1.296696478s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 logs --file /tmp/TestFunctionalserialLogsFileCmd2487750991/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 logs --file /tmp/TestFunctionalserialLogsFileCmd2487750991/001/logs.txt: (1.30177348s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.27s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-803727 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-803727
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-803727: exit status 115 (268.327931ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.230:32388 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-803727 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 config get cpus: exit status 14 (76.214341ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 config get cpus: exit status 14 (71.476946ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (29.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-803727 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-803727 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 196110: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.59s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-803727 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-803727 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (128.49962ms)

                                                
                                                
-- stdout --
	* [functional-803727] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:50:31.268355  195838 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:50:31.268654  195838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:50:31.268664  195838 out.go:374] Setting ErrFile to fd 2...
	I1124 02:50:31.268668  195838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:50:31.268856  195838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 02:50:31.269365  195838 out.go:368] Setting JSON to false
	I1124 02:50:31.270312  195838 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9171,"bootTime":1763943460,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:50:31.270417  195838 start.go:143] virtualization: kvm guest
	I1124 02:50:31.271899  195838 out.go:179] * [functional-803727] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:50:31.273180  195838 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:50:31.273231  195838 notify.go:221] Checking for updates...
	I1124 02:50:31.275287  195838 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:50:31.276863  195838 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 02:50:31.277985  195838 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 02:50:31.279055  195838 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:50:31.280169  195838 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:50:31.281891  195838 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:50:31.282683  195838 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:50:31.320975  195838 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 02:50:31.322076  195838 start.go:309] selected driver: kvm2
	I1124 02:50:31.322090  195838 start.go:927] validating driver "kvm2" against &{Name:functional-803727 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-803727 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:50:31.322267  195838 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:50:31.324143  195838 out.go:203] 
	W1124 02:50:31.325212  195838 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 02:50:31.326217  195838 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-803727 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
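
The exit status 23 above comes from minikube's pre-flight resource validation: 250MiB requested versus the 1800MB usable minimum reported in the RSRC_INSUFFICIENT_REQ_MEMORY message. As a rough illustration of that kind of check (this is not minikube's actual code; the constant and exit status are simply taken from the log output above):

package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // minimum quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

// validateMemory mimics the shape of the pre-flight check seen in the log:
// a request below the usable minimum is rejected before any VM is created.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // mirror the exit status seen in the log above
	}
	fmt.Println("memory request accepted")
}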

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-803727 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-803727 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (127.150817ms)

                                                
                                                
-- stdout --
	* [functional-803727] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:50:31.524488  195877 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:50:31.524606  195877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:50:31.524617  195877 out.go:374] Setting ErrFile to fd 2...
	I1124 02:50:31.524623  195877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:50:31.524961  195877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 02:50:31.525533  195877 out.go:368] Setting JSON to false
	I1124 02:50:31.526603  195877 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9171,"bootTime":1763943460,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:50:31.526683  195877 start.go:143] virtualization: kvm guest
	I1124 02:50:31.528232  195877 out.go:179] * [functional-803727] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 02:50:31.529647  195877 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:50:31.529700  195877 notify.go:221] Checking for updates...
	I1124 02:50:31.531856  195877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:50:31.533863  195877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 02:50:31.535126  195877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 02:50:31.536227  195877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:50:31.537404  195877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:50:31.538970  195877 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:50:31.539565  195877 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:50:31.574948  195877 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1124 02:50:31.576129  195877 start.go:309] selected driver: kvm2
	I1124 02:50:31.576145  195877 start.go:927] validating driver "kvm2" against &{Name:functional-803727 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-803727 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:50:31.576301  195877 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:50:31.578296  195877 out.go:203] 
	W1124 02:50:31.579385  195877 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 02:50:31.580352  195877 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-803727 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-803727 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-zqtsq" [7e75b71c-8a30-4999-b936-22d4d9257d38] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-zqtsq" [7e75b71c-8a30-4999-b936-22d4d9257d38] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004775325s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.230:30812
functional_test.go:1680: http://192.168.39.230:30812: success! body:
Request served by hello-node-connect-7d85dfc575-zqtsq

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.230:30812
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.58s)
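
The "success! body:" block above is just the kicbase/echo-server reflecting the request back, which is why the Host and User-Agent headers reappear in the body. The test's flow is: create a deployment, expose it as a NodePort service, ask minikube for the URL, then fetch it. A small hedged sketch of that final fetch in Go (the URL is the one minikube printed for this particular run and will differ elsewhere):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// NodePort URL as reported by `minikube service hello-node-connect --url`
	// in the run above; treat it as an example value, not a stable address.
	url := "http://192.168.39.230:30812"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// The echo-server returns the request details as the response body.
	fmt.Printf("status: %s\n%s", resp.Status, body)
}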

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [940b353c-8a1e-44df-9dab-cfd44c532a56] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003204907s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-803727 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-803727 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-803727 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-803727 apply -f testdata/storage-provisioner/pod.yaml
I1124 02:50:25.741032  189749 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f6227077-d6cc-498c-871c-4ed293a7c45c] Pending
helpers_test.go:352: "sp-pod" [f6227077-d6cc-498c-871c-4ed293a7c45c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f6227077-d6cc-498c-871c-4ed293a7c45c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.00473762s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-803727 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-803727 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-803727 delete -f testdata/storage-provisioner/pod.yaml: (2.168700687s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-803727 apply -f testdata/storage-provisioner/pod.yaml
I1124 02:50:44.201137  189749 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [01834a7c-f005-469a-8f43-e144eb82ecee] Pending
helpers_test.go:352: "sp-pod" [01834a7c-f005-469a-8f43-e144eb82ecee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1124 02:50:45.498598  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [01834a7c-f005-469a-8f43-e144eb82ecee] Running
2025/11/24 02:51:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003966721s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-803727 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.96s)
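
The point of the two sp-pod rounds above is persistence: the first pod writes a file under the claim's mount, the pod is deleted, and a second pod backed by the same PVC must still see that file. A hedged kubectl-driven sketch of that assertion (pod name, manifest path, and mount path are the ones used by the test above; this is an illustration, not the test code):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl subcommand and returns its combined output.
func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// First pod writes a marker file onto the PVC-backed volume.
	if out, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		log.Fatalf("touch: %v\n%s", err, out)
	}

	// Delete and recreate the pod; the PVC (and its data) outlives it.
	if out, err := kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		log.Fatalf("delete: %v\n%s", err, out)
	}
	if out, err := kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		log.Fatalf("apply: %v\n%s", err, out)
	}

	// (The real test waits for the replacement pod to become Ready here.)

	// The marker file must survive the pod replacement.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		log.Fatalf("ls: %v\n%s", err, out)
	}
	if !strings.Contains(out, "foo") {
		log.Fatal("/tmp/mount/foo did not persist across pod recreation")
	}
	log.Println("file persisted across pod recreation")
}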

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh -n functional-803727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 cp functional-803727:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd279098891/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh -n functional-803727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh -n functional-803727 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.22s)
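
For reference, the copy round-trip exercised above uses minikube's cp subcommand in both directions; a minimal sketch with the same profile (/tmp/cp-test.txt is an arbitrary example destination on the host):

# Host -> node, verify inside the guest, then node -> host.
out/minikube-linux-amd64 -p functional-803727 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-803727 ssh -n functional-803727 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-amd64 -p functional-803727 cp functional-803727:/home/docker/cp-test.txt /tmp/cp-test.txt
diff testdata/cp-test.txt /tmp/cp-test.txt    # expect no output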

                                                
                                    
TestFunctional/parallel/MySQL (24.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-803727 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-wzvsl" [614bea94-0240-4a44-829a-e49f03715f4e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-wzvsl" [614bea94-0240-4a44-829a-e49f03715f4e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.209880137s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-803727 exec mysql-5bb876957f-wzvsl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-803727 exec mysql-5bb876957f-wzvsl -- mysql -ppassword -e "show databases;": exit status 1 (210.598302ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 02:50:54.716284  189749 retry.go:31] will retry after 1.300979927s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-803727 exec mysql-5bb876957f-wzvsl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-803727 exec mysql-5bb876957f-wzvsl -- mysql -ppassword -e "show databases;": exit status 1 (149.993376ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 02:50:56.168347  189749 retry.go:31] will retry after 1.693004351s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-803727 exec mysql-5bb876957f-wzvsl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.91s)
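
The two failed exec attempts above (ERROR 1045 and ERROR 2002) are expected start-up noise: mysqld is still initializing for a few seconds after the pod reports Running, so the harness retries. A short retry loop reproduces that behaviour outside the test (targeting deploy/mysql is an assumption based on the mysql-5bb876957f-... pod name):

# Retry the query until mysqld finishes initializing.
for i in $(seq 1 10); do
  kubectl --context functional-803727 exec deploy/mysql -- mysql -ppassword -e "show databases;" && break
  sleep 3
done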

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/189749/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo cat /etc/test/nested/copy/189749/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/189749.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo cat /etc/ssl/certs/189749.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/189749.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo cat /usr/share/ca-certificates/189749.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1897492.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo cat /etc/ssl/certs/1897492.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1897492.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo cat /usr/share/ca-certificates/1897492.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)
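
The paths checked above are where minikube places host-provided certificates inside the guest trust store. A rough sketch of exercising that sync by hand, assuming the default ~/.minikube home and a hypothetical extra-ca.pem:

# Drop a PEM into the certs directory, re-run start so it is copied into the guest, then verify.
cp extra-ca.pem ~/.minikube/certs/
out/minikube-linux-amd64 start -p functional-803727
out/minikube-linux-amd64 -p functional-803727 ssh "ls -l /etc/ssl/certs /usr/share/ca-certificates | grep extra-ca"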

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-803727 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
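
The go-template above simply enumerates label keys on the first node; equivalent, arguably more readable checks (the node name matching the profile name holds for this single-node cluster):

kubectl --context functional-803727 get nodes --show-labels
kubectl --context functional-803727 get node functional-803727 -o jsonpath='{.metadata.labels}'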

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 ssh "sudo systemctl is-active docker": exit status 1 (292.411144ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 ssh "sudo systemctl is-active containerd": exit status 1 (237.238886ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
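
The non-zero exits above are the expected outcome: systemctl is-active prints the unit state and exits with status 3 when the unit is inactive, which is what this check relies on for the runtimes that should be off under cri-o. A minimal sketch:

# With the crio runtime selected, docker and containerd should report "inactive" (exit status 3).
out/minikube-linux-amd64 -p functional-803727 ssh "sudo systemctl is-active crio"        # active, exit 0
out/minikube-linux-amd64 -p functional-803727 ssh "sudo systemctl is-active docker"      # inactive, exit 3
out/minikube-linux-amd64 -p functional-803727 ssh "sudo systemctl is-active containerd"  # inactive, exit 3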

                                                
                                    
TestFunctional/parallel/License (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-803727 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-803727 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-dfztd" [387195a9-978f-48a7-a99a-02feffc40fad] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-dfztd" [387195a9-978f-48a7-a99a-02feffc40fad] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.010886386s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)
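
The deployment used by the ServiceCmd subtests below is created with two stock kubectl commands; a minimal sketch of the same setup plus the URL lookup the later subtests perform:

kubectl --context functional-803727 create deployment hello-node --image=kicbase/echo-server
kubectl --context functional-803727 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-803727 wait --for=condition=Available deployment/hello-node --timeout=120s
out/minikube-linux-amd64 -p functional-803727 service hello-node --url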

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "245.868977ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.327143ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "248.487206ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "68.604664ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdany-port3057283035/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763952621744446414" to /tmp/TestFunctionalparallelMountCmdany-port3057283035/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763952621744446414" to /tmp/TestFunctionalparallelMountCmdany-port3057283035/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763952621744446414" to /tmp/TestFunctionalparallelMountCmdany-port3057283035/001/test-1763952621744446414
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (170.549355ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 02:50:21.915413  189749 retry.go:31] will retry after 700.657843ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 02:50 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 02:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 02:50 test-1763952621744446414
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh cat /mount-9p/test-1763952621744446414
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-803727 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [256a5589-fee4-44c2-be96-eeb59a303dac] Pending
helpers_test.go:352: "busybox-mount" [256a5589-fee4-44c2-be96-eeb59a303dac] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [256a5589-fee4-44c2-be96-eeb59a303dac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [256a5589-fee4-44c2-be96-eeb59a303dac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003461752s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-803727 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdany-port3057283035/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.28s)
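
The 9p mount flow above can be reproduced directly; a minimal sketch, assuming /tmp/hostdir as an example host directory (the first findmnt may need one retry while the mount daemon starts, as seen in the log):

# Start the mount in the background, verify it from inside the guest, then tear it down.
mkdir -p /tmp/hostdir
out/minikube-linux-amd64 mount -p functional-803727 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-803727 ssh "ls -la /mount-9p"
out/minikube-linux-amd64 -p functional-803727 ssh "sudo umount -f /mount-9p"
kill $MOUNT_PID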

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 service list -o json
functional_test.go:1504: Took "230.408449ms" to run "out/minikube-linux-amd64 -p functional-803727 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.230:31088
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdspecific-port1688896148/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.607718ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 02:50:30.284710  189749 retry.go:31] will retry after 532.608467ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdspecific-port1688896148/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 ssh "sudo umount -f /mount-9p": exit status 1 (182.201241ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-803727 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdspecific-port1688896148/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.230:31088
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-803727 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-803727
localhost/kicbase/echo-server:functional-803727
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-803727 image ls --format short --alsologtostderr:
I1124 02:50:48.910844  196429 out.go:360] Setting OutFile to fd 1 ...
I1124 02:50:48.911112  196429 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:48.911122  196429 out.go:374] Setting ErrFile to fd 2...
I1124 02:50:48.911127  196429 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:48.911325  196429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
I1124 02:50:48.911910  196429 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:48.912007  196429 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:48.913968  196429 ssh_runner.go:195] Run: systemctl --version
I1124 02:50:48.916020  196429 main.go:143] libmachine: domain functional-803727 has defined MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:48.916445  196429 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:78:7f", ip: ""} in network mk-functional-803727: {Iface:virbr1 ExpiryTime:2025-11-24 03:47:45 +0000 UTC Type:0 Mac:52:54:00:23:78:7f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-803727 Clientid:01:52:54:00:23:78:7f}
I1124 02:50:48.916483  196429 main.go:143] libmachine: domain functional-803727 has defined IP address 192.168.39.230 and MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:48.916622  196429 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/functional-803727/id_rsa Username:docker}
I1124 02:50:48.997388  196429 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
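
The same image inventory is available in several output formats from the CLI; a quick reference for the variants this and the following ImageList subtests exercise:

out/minikube-linux-amd64 -p functional-803727 image ls --format short
out/minikube-linux-amd64 -p functional-803727 image ls --format table
out/minikube-linux-amd64 -p functional-803727 image ls --format json
out/minikube-linux-amd64 -p functional-803727 image ls --format yaml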

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-803727 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-803727  │ 14c19b46d037b │ 3.33kB │
│ localhost/my-image                      │ functional-803727  │ e394199d5fa44 │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-803727  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-803727 image ls --format table --alsologtostderr:
I1124 02:50:56.398325  196547 out.go:360] Setting OutFile to fd 1 ...
I1124 02:50:56.398471  196547 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:56.398481  196547 out.go:374] Setting ErrFile to fd 2...
I1124 02:50:56.398486  196547 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:56.398699  196547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
I1124 02:50:56.399303  196547 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:56.399437  196547 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:56.401944  196547 ssh_runner.go:195] Run: systemctl --version
I1124 02:50:56.403939  196547 main.go:143] libmachine: domain functional-803727 has defined MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:56.404315  196547 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:78:7f", ip: ""} in network mk-functional-803727: {Iface:virbr1 ExpiryTime:2025-11-24 03:47:45 +0000 UTC Type:0 Mac:52:54:00:23:78:7f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-803727 Clientid:01:52:54:00:23:78:7f}
I1124 02:50:56.404337  196547 main.go:143] libmachine: domain functional-803727 has defined IP address 192.168.39.230 and MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:56.404512  196547 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/functional-803727/id_rsa Username:docker}
I1124 02:50:56.486199  196547 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-803727 image ls --format json --alsologtostderr:
[{"id":"e394199d5fa440dfb604b82d139c02ffcd1129d4ad0b0d0faefd7d0d29f0067c","repoDigests":["localhost/my-image@sha256:44ed56e962c6ca630c306fe4689d1856e1a80c1d3b8641d3b92807c606704215"],"repoTags":["localhost/my-image:functional-803727"],"size":"1468599"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c5
9475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-803727"],"size":"4944818"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cd
c2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe
8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b446173036
94fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags
":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9
e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"54f89afa084e68d10a744db3cbb8d0d4c02d1910377010db7432c4bcc464091f","repoDigests":["docker.io/library/c0532b5c1d015a928c3627f09d5cd52c7d09285453f4282bde8016e6b7f1c364-tmp@sha256:569a6e4bfc250e2f7dc5463e723c47be24182a9aa3962cc8d20949a01ecf6923"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-pro
visioner:v5"],"size":"31470524"},{"id":"14c19b46d037bdc1b21eda0b8f42363bbc559ef37f6f06376af5195a322c2452","repoDigests":["localhost/minikube-local-cache-test@sha256:0c65e9eb0a6357e96c87e72e9c6e5523ed56f7f67215df91f7da61f2b4bb6e56"],"repoTags":["localhost/minikube-local-cache-test:functional-803727"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-803727 image ls --format json --alsologtostderr:
I1124 02:50:56.210765  196537 out.go:360] Setting OutFile to fd 1 ...
I1124 02:50:56.210888  196537 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:56.210898  196537 out.go:374] Setting ErrFile to fd 2...
I1124 02:50:56.210903  196537 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:56.211139  196537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
I1124 02:50:56.211713  196537 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:56.211831  196537 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:56.214151  196537 ssh_runner.go:195] Run: systemctl --version
I1124 02:50:56.216620  196537 main.go:143] libmachine: domain functional-803727 has defined MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:56.217017  196537 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:78:7f", ip: ""} in network mk-functional-803727: {Iface:virbr1 ExpiryTime:2025-11-24 03:47:45 +0000 UTC Type:0 Mac:52:54:00:23:78:7f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-803727 Clientid:01:52:54:00:23:78:7f}
I1124 02:50:56.217050  196537 main.go:143] libmachine: domain functional-803727 has defined IP address 192.168.39.230 and MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:56.217188  196537 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/functional-803727/id_rsa Username:docker}
I1124 02:50:56.298503  196537 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-803727 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-803727
size: "4944818"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 14c19b46d037bdc1b21eda0b8f42363bbc559ef37f6f06376af5195a322c2452
repoDigests:
- localhost/minikube-local-cache-test@sha256:0c65e9eb0a6357e96c87e72e9c6e5523ed56f7f67215df91f7da61f2b4bb6e56
repoTags:
- localhost/minikube-local-cache-test:functional-803727
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-803727 image ls --format yaml --alsologtostderr:
I1124 02:50:49.141160  196440 out.go:360] Setting OutFile to fd 1 ...
I1124 02:50:49.141447  196440 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:49.141456  196440 out.go:374] Setting ErrFile to fd 2...
I1124 02:50:49.141459  196440 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:49.141689  196440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
I1124 02:50:49.142318  196440 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:49.142455  196440 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:49.144796  196440 ssh_runner.go:195] Run: systemctl --version
I1124 02:50:49.147562  196440 main.go:143] libmachine: domain functional-803727 has defined MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:49.148058  196440 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:78:7f", ip: ""} in network mk-functional-803727: {Iface:virbr1 ExpiryTime:2025-11-24 03:47:45 +0000 UTC Type:0 Mac:52:54:00:23:78:7f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-803727 Clientid:01:52:54:00:23:78:7f}
I1124 02:50:49.148105  196440 main.go:143] libmachine: domain functional-803727 has defined IP address 192.168.39.230 and MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:49.148259  196440 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/functional-803727/id_rsa Username:docker}
I1124 02:50:49.237007  196440 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 ssh pgrep buildkitd: exit status 1 (170.824035ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image build -t localhost/my-image:functional-803727 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 image build -t localhost/my-image:functional-803727 testdata/build --alsologtostderr: (6.477402456s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-803727 image build -t localhost/my-image:functional-803727 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 54f89afa084
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-803727
--> e394199d5fa
Successfully tagged localhost/my-image:functional-803727
e394199d5fa440dfb604b82d139c02ffcd1129d4ad0b0d0faefd7d0d29f0067c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-803727 image build -t localhost/my-image:functional-803727 testdata/build --alsologtostderr:
I1124 02:50:49.509757  196477 out.go:360] Setting OutFile to fd 1 ...
I1124 02:50:49.510570  196477 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:49.510582  196477 out.go:374] Setting ErrFile to fd 2...
I1124 02:50:49.510587  196477 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:50:49.510815  196477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
I1124 02:50:49.511347  196477 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:49.512128  196477 config.go:182] Loaded profile config "functional-803727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:50:49.514442  196477 ssh_runner.go:195] Run: systemctl --version
I1124 02:50:49.516831  196477 main.go:143] libmachine: domain functional-803727 has defined MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:49.517293  196477 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:78:7f", ip: ""} in network mk-functional-803727: {Iface:virbr1 ExpiryTime:2025-11-24 03:47:45 +0000 UTC Type:0 Mac:52:54:00:23:78:7f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-803727 Clientid:01:52:54:00:23:78:7f}
I1124 02:50:49.517326  196477 main.go:143] libmachine: domain functional-803727 has defined IP address 192.168.39.230 and MAC address 52:54:00:23:78:7f in network mk-functional-803727
I1124 02:50:49.517534  196477 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/functional-803727/id_rsa Username:docker}
I1124 02:50:49.616557  196477 build_images.go:162] Building image from path: /tmp/build.3936985153.tar
I1124 02:50:49.616640  196477 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 02:50:49.629112  196477 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3936985153.tar
I1124 02:50:49.634209  196477 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3936985153.tar: stat -c "%s %y" /var/lib/minikube/build/build.3936985153.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3936985153.tar': No such file or directory
I1124 02:50:49.634254  196477 ssh_runner.go:362] scp /tmp/build.3936985153.tar --> /var/lib/minikube/build/build.3936985153.tar (3072 bytes)
I1124 02:50:49.678528  196477 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3936985153
I1124 02:50:49.691790  196477 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3936985153 -xf /var/lib/minikube/build/build.3936985153.tar
I1124 02:50:49.703327  196477 crio.go:315] Building image: /var/lib/minikube/build/build.3936985153
I1124 02:50:49.703449  196477 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-803727 /var/lib/minikube/build/build.3936985153 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1124 02:50:55.895339  196477 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-803727 /var/lib/minikube/build/build.3936985153 --cgroup-manager=cgroupfs: (6.191834872s)
I1124 02:50:55.895449  196477 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3936985153
I1124 02:50:55.909466  196477 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3936985153.tar
I1124 02:50:55.922652  196477 build_images.go:218] Built localhost/my-image:functional-803727 from /tmp/build.3936985153.tar
I1124 02:50:55.922690  196477 build_images.go:134] succeeded building to: functional-803727
I1124 02:50:55.922697  196477 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.87s)
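The subtest above exercises `minikube image build`, which tars the local build context, copies it into the VM, and runs `podman build` there because the profile uses the crio runtime. A minimal sketch of the same workflow, reusing the functional-803727 profile from this run (the build directory is illustrative):

    # build a local directory into the cluster's image store (crio drives podman under the hood)
    minikube -p functional-803727 image build -t localhost/my-image:functional-803727 ./testdata/build
    # confirm the image is visible to the container runtime inside the VM
    minikube -p functional-803727 image ls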

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.95s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.925451429s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-803727
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913345601/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913345601/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913345601/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T" /mount1: exit status 1 (255.528159ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 02:50:31.845095  189749 retry.go:31] will retry after 303.222775ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-803727 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913345601/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913345601/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-803727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913345601/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)
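VerifyCleanup starts three background mount daemons from a single host directory and then kills them all with one `--kill` call. A hedged sketch of the same sequence (the host path is illustrative; the test launches each mount as a daemon, approximated here with `&`):

    minikube mount -p functional-803727 /tmp/shared:/mount1 &
    minikube mount -p functional-803727 /tmp/shared:/mount2 &
    minikube mount -p functional-803727 /tmp/shared:/mount3 &
    # verify one of the mounts from inside the VM
    minikube -p functional-803727 ssh "findmnt -T /mount1"
    # tear down every mount process belonging to the profile
    minikube mount -p functional-803727 --kill=true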

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.97s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image load --daemon kicbase/echo-server:functional-803727 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 image load --daemon kicbase/echo-server:functional-803727 --alsologtostderr: (2.716269398s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.97s)
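`image load --daemon` copies an image from the host's Docker daemon into the cluster's container runtime, which is how the echo-server tag prepared in the Setup subtest reaches crio here. A sketch using the same tag as this run:

    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-803727
    minikube -p functional-803727 image load --daemon kicbase/echo-server:functional-803727
    minikube -p functional-803727 image ls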

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
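All three update-context cases run the same command; it rewrites the profile's kubeconfig entry so that kubectl points at the cluster's current API server address, and only the surrounding kubeconfig state differs between the cases. A sketch, assuming the default kubeconfig location:

    # re-sync ~/.kube/config with the profile's current endpoint
    minikube -p functional-803727 update-context
    kubectl config current-context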

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image load --daemon kicbase/echo-server:functional-803727 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-803727
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image load --daemon kicbase/echo-server:functional-803727 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image save kicbase/echo-server:functional-803727 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image rm kicbase/echo-server:functional-803727 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 image rm kicbase/echo-server:functional-803727 --alsologtostderr: (2.126023429s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.042738367s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.26s)
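Together with ImageSaveToFile above, this round-trips an image through a tarball on the host: save it out of the cluster runtime, then load it back. A sketch of the pair (the tar path is illustrative):

    minikube -p functional-803727 image save kicbase/echo-server:functional-803727 /tmp/echo-server-save.tar
    minikube -p functional-803727 image load /tmp/echo-server-save.tar
    minikube -p functional-803727 image ls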

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-803727
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-803727 image save --daemon kicbase/echo-server:functional-803727 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-803727 image save --daemon kicbase/echo-server:functional-803727 --alsologtostderr: (3.579875299s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-803727
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.62s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-803727
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-803727
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-803727
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (196.61s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1124 02:51:13.205159  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m16.00486325s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (196.61s)
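The `--ha` flag provisions multiple control-plane nodes behind a shared API endpoint, and `--wait true` blocks until every component reports ready, which is why this start takes a little over three minutes. A hedged equivalent of the invocation under test:

    minikube start -p ha-739536 --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
    minikube -p ha-739536 status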

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.53s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 kubectl -- rollout status deployment/busybox: (5.072149521s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-9wvg4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-q5knq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-tzcx2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-9wvg4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-q5knq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-tzcx2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-9wvg4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-q5knq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-tzcx2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.53s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.42s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-9wvg4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-9wvg4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-q5knq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-q5knq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-tzcx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 kubectl -- exec busybox-7b57f96db7-tzcx2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.42s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (42.72s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 node add --alsologtostderr -v 5: (42.060084025s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (42.72s)
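`node add` without extra flags joins a worker to the existing profile; the status call afterwards should list the new machine with type Worker. Sketch:

    minikube -p ha-739536 node add
    minikube -p ha-739536 status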

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.08s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-739536 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.14s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp testdata/cp-test.txt ha-739536:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2191197352/001/cp-test_ha-739536.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536:/home/docker/cp-test.txt ha-739536-m02:/home/docker/cp-test_ha-739536_ha-739536-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m02 "sudo cat /home/docker/cp-test_ha-739536_ha-739536-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536:/home/docker/cp-test.txt ha-739536-m03:/home/docker/cp-test_ha-739536_ha-739536-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m03 "sudo cat /home/docker/cp-test_ha-739536_ha-739536-m03.txt"
E1124 02:55:19.099302  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:55:19.105821  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:55:19.117348  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:55:19.139667  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536:/home/docker/cp-test.txt ha-739536-m04:/home/docker/cp-test_ha-739536_ha-739536-m04.txt
E1124 02:55:19.181513  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:55:19.263006  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536 "sudo cat /home/docker/cp-test.txt"
E1124 02:55:19.424590  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m04 "sudo cat /home/docker/cp-test_ha-739536_ha-739536-m04.txt"
E1124 02:55:19.747235  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp testdata/cp-test.txt ha-739536-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2191197352/001/cp-test_ha-739536-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m02 "sudo cat /home/docker/cp-test.txt"
E1124 02:55:20.389542  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m02:/home/docker/cp-test.txt ha-739536:/home/docker/cp-test_ha-739536-m02_ha-739536.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536 "sudo cat /home/docker/cp-test_ha-739536-m02_ha-739536.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m02:/home/docker/cp-test.txt ha-739536-m03:/home/docker/cp-test_ha-739536-m02_ha-739536-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m03 "sudo cat /home/docker/cp-test_ha-739536-m02_ha-739536-m03.txt"
E1124 02:55:21.671886  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m02:/home/docker/cp-test.txt ha-739536-m04:/home/docker/cp-test_ha-739536-m02_ha-739536-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m04 "sudo cat /home/docker/cp-test_ha-739536-m02_ha-739536-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp testdata/cp-test.txt ha-739536-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2191197352/001/cp-test_ha-739536-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m03:/home/docker/cp-test.txt ha-739536:/home/docker/cp-test_ha-739536-m03_ha-739536.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536 "sudo cat /home/docker/cp-test_ha-739536-m03_ha-739536.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m03:/home/docker/cp-test.txt ha-739536-m02:/home/docker/cp-test_ha-739536-m03_ha-739536-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m02 "sudo cat /home/docker/cp-test_ha-739536-m03_ha-739536-m02.txt"
E1124 02:55:24.233496  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m03:/home/docker/cp-test.txt ha-739536-m04:/home/docker/cp-test_ha-739536-m03_ha-739536-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m04 "sudo cat /home/docker/cp-test_ha-739536-m03_ha-739536-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp testdata/cp-test.txt ha-739536-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2191197352/001/cp-test_ha-739536-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m04:/home/docker/cp-test.txt ha-739536:/home/docker/cp-test_ha-739536-m04_ha-739536.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536 "sudo cat /home/docker/cp-test_ha-739536-m04_ha-739536.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m04:/home/docker/cp-test.txt ha-739536-m02:/home/docker/cp-test_ha-739536-m04_ha-739536-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m02 "sudo cat /home/docker/cp-test_ha-739536-m04_ha-739536-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 cp ha-739536-m04:/home/docker/cp-test.txt ha-739536-m03:/home/docker/cp-test_ha-739536-m04_ha-739536-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 ssh -n ha-739536-m03 "sudo cat /home/docker/cp-test_ha-739536-m04_ha-739536-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.14s)
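CopyFile fans a test file through every node: host to node with `minikube cp`, node back to host, and node to node, checking each hop with `ssh ... cat`. A condensed sketch of one hop in each direction (file names are illustrative):

    # host -> node
    minikube -p ha-739536 cp ./cp-test.txt ha-739536:/home/docker/cp-test.txt
    # node -> host
    minikube -p ha-739536 cp ha-739536:/home/docker/cp-test.txt ./cp-test-copy.txt
    # node -> node
    minikube -p ha-739536 cp ha-739536:/home/docker/cp-test.txt ha-739536-m02:/home/docker/cp-test.txt
    # verify on the target node
    minikube -p ha-739536 ssh -n ha-739536-m02 "sudo cat /home/docker/cp-test.txt"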

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (71.52s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 node stop m02 --alsologtostderr -v 5
E1124 02:55:29.355404  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:55:39.597164  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:55:45.497620  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:56:00.079347  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 node stop m02 --alsologtostderr -v 5: (1m11.011594271s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5: exit status 7 (510.561887ms)

                                                
                                                
-- stdout --
	ha-739536
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-739536-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-739536-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-739536-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:56:38.684868  199496 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:56:38.684980  199496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:56:38.684985  199496 out.go:374] Setting ErrFile to fd 2...
	I1124 02:56:38.684990  199496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:56:38.685227  199496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 02:56:38.685435  199496 out.go:368] Setting JSON to false
	I1124 02:56:38.685466  199496 mustload.go:66] Loading cluster: ha-739536
	I1124 02:56:38.685577  199496 notify.go:221] Checking for updates...
	I1124 02:56:38.685897  199496 config.go:182] Loaded profile config "ha-739536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:56:38.685927  199496 status.go:174] checking status of ha-739536 ...
	I1124 02:56:38.688115  199496 status.go:371] ha-739536 host status = "Running" (err=<nil>)
	I1124 02:56:38.688133  199496 host.go:66] Checking if "ha-739536" exists ...
	I1124 02:56:38.690851  199496 main.go:143] libmachine: domain ha-739536 has defined MAC address 52:54:00:8d:3d:29 in network mk-ha-739536
	I1124 02:56:38.691333  199496 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3d:29", ip: ""} in network mk-ha-739536: {Iface:virbr1 ExpiryTime:2025-11-24 03:51:22 +0000 UTC Type:0 Mac:52:54:00:8d:3d:29 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-739536 Clientid:01:52:54:00:8d:3d:29}
	I1124 02:56:38.691364  199496 main.go:143] libmachine: domain ha-739536 has defined IP address 192.168.39.145 and MAC address 52:54:00:8d:3d:29 in network mk-ha-739536
	I1124 02:56:38.691566  199496 host.go:66] Checking if "ha-739536" exists ...
	I1124 02:56:38.691760  199496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:56:38.694149  199496 main.go:143] libmachine: domain ha-739536 has defined MAC address 52:54:00:8d:3d:29 in network mk-ha-739536
	I1124 02:56:38.694667  199496 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3d:29", ip: ""} in network mk-ha-739536: {Iface:virbr1 ExpiryTime:2025-11-24 03:51:22 +0000 UTC Type:0 Mac:52:54:00:8d:3d:29 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-739536 Clientid:01:52:54:00:8d:3d:29}
	I1124 02:56:38.694700  199496 main.go:143] libmachine: domain ha-739536 has defined IP address 192.168.39.145 and MAC address 52:54:00:8d:3d:29 in network mk-ha-739536
	I1124 02:56:38.694857  199496 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/ha-739536/id_rsa Username:docker}
	I1124 02:56:38.786432  199496 ssh_runner.go:195] Run: systemctl --version
	I1124 02:56:38.792845  199496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:56:38.811839  199496 kubeconfig.go:125] found "ha-739536" server: "https://192.168.39.254:8443"
	I1124 02:56:38.811884  199496 api_server.go:166] Checking apiserver status ...
	I1124 02:56:38.811933  199496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:56:38.835632  199496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W1124 02:56:38.848605  199496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 02:56:38.848674  199496 ssh_runner.go:195] Run: ls
	I1124 02:56:38.853879  199496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1124 02:56:38.859579  199496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1124 02:56:38.859610  199496 status.go:463] ha-739536 apiserver status = Running (err=<nil>)
	I1124 02:56:38.859622  199496 status.go:176] ha-739536 status: &{Name:ha-739536 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:56:38.859645  199496 status.go:174] checking status of ha-739536-m02 ...
	I1124 02:56:38.861463  199496 status.go:371] ha-739536-m02 host status = "Stopped" (err=<nil>)
	I1124 02:56:38.861489  199496 status.go:384] host is not running, skipping remaining checks
	I1124 02:56:38.861496  199496 status.go:176] ha-739536-m02 status: &{Name:ha-739536-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:56:38.861516  199496 status.go:174] checking status of ha-739536-m03 ...
	I1124 02:56:38.862963  199496 status.go:371] ha-739536-m03 host status = "Running" (err=<nil>)
	I1124 02:56:38.862983  199496 host.go:66] Checking if "ha-739536-m03" exists ...
	I1124 02:56:38.866175  199496 main.go:143] libmachine: domain ha-739536-m03 has defined MAC address 52:54:00:42:06:36 in network mk-ha-739536
	I1124 02:56:38.866687  199496 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:06:36", ip: ""} in network mk-ha-739536: {Iface:virbr1 ExpiryTime:2025-11-24 03:53:18 +0000 UTC Type:0 Mac:52:54:00:42:06:36 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ha-739536-m03 Clientid:01:52:54:00:42:06:36}
	I1124 02:56:38.866729  199496 main.go:143] libmachine: domain ha-739536-m03 has defined IP address 192.168.39.34 and MAC address 52:54:00:42:06:36 in network mk-ha-739536
	I1124 02:56:38.866900  199496 host.go:66] Checking if "ha-739536-m03" exists ...
	I1124 02:56:38.867107  199496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:56:38.869344  199496 main.go:143] libmachine: domain ha-739536-m03 has defined MAC address 52:54:00:42:06:36 in network mk-ha-739536
	I1124 02:56:38.869775  199496 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:06:36", ip: ""} in network mk-ha-739536: {Iface:virbr1 ExpiryTime:2025-11-24 03:53:18 +0000 UTC Type:0 Mac:52:54:00:42:06:36 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ha-739536-m03 Clientid:01:52:54:00:42:06:36}
	I1124 02:56:38.869803  199496 main.go:143] libmachine: domain ha-739536-m03 has defined IP address 192.168.39.34 and MAC address 52:54:00:42:06:36 in network mk-ha-739536
	I1124 02:56:38.869971  199496 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/ha-739536-m03/id_rsa Username:docker}
	I1124 02:56:38.958536  199496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:56:38.975910  199496 kubeconfig.go:125] found "ha-739536" server: "https://192.168.39.254:8443"
	I1124 02:56:38.975943  199496 api_server.go:166] Checking apiserver status ...
	I1124 02:56:38.975993  199496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:56:38.995695  199496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1737/cgroup
	W1124 02:56:39.008492  199496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1737/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 02:56:39.008554  199496 ssh_runner.go:195] Run: ls
	I1124 02:56:39.014777  199496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1124 02:56:39.021584  199496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1124 02:56:39.021622  199496 status.go:463] ha-739536-m03 apiserver status = Running (err=<nil>)
	I1124 02:56:39.021635  199496 status.go:176] ha-739536-m03 status: &{Name:ha-739536-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:56:39.021656  199496 status.go:174] checking status of ha-739536-m04 ...
	I1124 02:56:39.023460  199496 status.go:371] ha-739536-m04 host status = "Running" (err=<nil>)
	I1124 02:56:39.023498  199496 host.go:66] Checking if "ha-739536-m04" exists ...
	I1124 02:56:39.026314  199496 main.go:143] libmachine: domain ha-739536-m04 has defined MAC address 52:54:00:59:60:c1 in network mk-ha-739536
	I1124 02:56:39.026750  199496 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:59:60:c1", ip: ""} in network mk-ha-739536: {Iface:virbr1 ExpiryTime:2025-11-24 03:54:48 +0000 UTC Type:0 Mac:52:54:00:59:60:c1 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-739536-m04 Clientid:01:52:54:00:59:60:c1}
	I1124 02:56:39.026780  199496 main.go:143] libmachine: domain ha-739536-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:59:60:c1 in network mk-ha-739536
	I1124 02:56:39.026911  199496 host.go:66] Checking if "ha-739536-m04" exists ...
	I1124 02:56:39.027113  199496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:56:39.029210  199496 main.go:143] libmachine: domain ha-739536-m04 has defined MAC address 52:54:00:59:60:c1 in network mk-ha-739536
	I1124 02:56:39.029666  199496 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:59:60:c1", ip: ""} in network mk-ha-739536: {Iface:virbr1 ExpiryTime:2025-11-24 03:54:48 +0000 UTC Type:0 Mac:52:54:00:59:60:c1 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-739536-m04 Clientid:01:52:54:00:59:60:c1}
	I1124 02:56:39.029689  199496 main.go:143] libmachine: domain ha-739536-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:59:60:c1 in network mk-ha-739536
	I1124 02:56:39.029852  199496 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/ha-739536-m04/id_rsa Username:docker}
	I1124 02:56:39.109614  199496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:56:39.128145  199496 status.go:176] ha-739536-m04 status: &{Name:ha-739536-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (71.52s)
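Stopping one control-plane node leaves the cluster reachable through the remaining members, but `minikube status` deliberately exits non-zero (status 7 in this run) whenever a node is not running, so callers have to read the per-node output rather than treating the exit code as a hard failure. Sketch:

    minikube -p ha-739536 node stop m02
    # exit code 7 here only signals a stopped node, not a broken profile
    minikube -p ha-739536 status || echo "one or more nodes are not running"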

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (37.65s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 node start m02 --alsologtostderr -v 5
E1124 02:56:41.041454  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 node start m02 --alsologtostderr -v 5: (36.815005872s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.65s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.94s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 stop --alsologtostderr -v 5
E1124 02:58:02.966253  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:00:19.101856  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:00:45.499491  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:00:46.808196  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 stop --alsologtostderr -v 5: (4m7.917657656s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 start --wait true --alsologtostderr -v 5
E1124 03:02:08.568239  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 start --wait true --alsologtostderr -v 5: (1m58.870197782s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.94s)
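This case stops the whole HA profile, restarts it, and then checks that `node list` still reports the same machines, i.e. a stop/start cycle must not drop control-plane or worker nodes. Sketch:

    minikube -p ha-739536 node list
    minikube -p ha-739536 stop
    minikube -p ha-739536 start --wait true
    minikube -p ha-739536 node list   # should match the pre-stop list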

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.96s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 node delete m03 --alsologtostderr -v 5: (17.321870331s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.96s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (243.94s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 stop --alsologtostderr -v 5
E1124 03:05:19.099891  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:05:45.499731  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 stop --alsologtostderr -v 5: (4m3.873325103s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5: exit status 7 (70.625436ms)

                                                
                                                
-- stdout --
	ha-739536
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-739536-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-739536-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:07:47.426213  202790 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:07:47.426389  202790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:07:47.426399  202790 out.go:374] Setting ErrFile to fd 2...
	I1124 03:07:47.426406  202790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:07:47.426632  202790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:07:47.426863  202790 out.go:368] Setting JSON to false
	I1124 03:07:47.426907  202790 mustload.go:66] Loading cluster: ha-739536
	I1124 03:07:47.427067  202790 notify.go:221] Checking for updates...
	I1124 03:07:47.427513  202790 config.go:182] Loaded profile config "ha-739536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:07:47.427541  202790 status.go:174] checking status of ha-739536 ...
	I1124 03:07:47.429778  202790 status.go:371] ha-739536 host status = "Stopped" (err=<nil>)
	I1124 03:07:47.429800  202790 status.go:384] host is not running, skipping remaining checks
	I1124 03:07:47.429807  202790 status.go:176] ha-739536 status: &{Name:ha-739536 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:07:47.429829  202790 status.go:174] checking status of ha-739536-m02 ...
	I1124 03:07:47.431160  202790 status.go:371] ha-739536-m02 host status = "Stopped" (err=<nil>)
	I1124 03:07:47.431178  202790 status.go:384] host is not running, skipping remaining checks
	I1124 03:07:47.431185  202790 status.go:176] ha-739536-m02 status: &{Name:ha-739536-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:07:47.431200  202790 status.go:174] checking status of ha-739536-m04 ...
	I1124 03:07:47.432535  202790 status.go:371] ha-739536-m04 host status = "Stopped" (err=<nil>)
	I1124 03:07:47.432553  202790 status.go:384] host is not running, skipping remaining checks
	I1124 03:07:47.432560  202790 status.go:176] ha-739536-m04 status: &{Name:ha-739536-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (243.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (95.76s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m35.124626402s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (95.76s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (101.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 node add --control-plane --alsologtostderr -v 5
E1124 03:10:19.099550  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:10:45.497910  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-739536 node add --control-plane --alsologtostderr -v 5: (1m40.452793706s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-739536 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (101.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                    
TestJSONOutput/start/Command (79.6s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-484433 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1124 03:11:42.171761  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-484433 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.601394091s)
--- PASS: TestJSONOutput/start/Command (79.60s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-484433 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-484433 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-484433 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-484433 --output=json --user=testUser: (7.103053666s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-767640 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-767640 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (82.983497ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5ea69a4a-50a6-4a78-bb64-ecb2ed1b5a97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-767640] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2f92969-ced6-4174-852e-a21e1f97b049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21975"}}
	{"specversion":"1.0","id":"c6f9ff18-23e3-48dd-9816-25b793ade415","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ecc8339f-6f64-4601-9a03-571492b2b67b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig"}}
	{"specversion":"1.0","id":"a82d8ea9-0a0e-4c1b-bfa2-cbd5905b9e3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube"}}
	{"specversion":"1.0","id":"a0d3a0e6-cab6-45b6-a9ab-8cd6b94d8773","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"18ac9f9d-7add-4421-bcdc-b65a592137f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6111c3c8-dba7-459f-b2ff-37566ee81a8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-767640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-767640
--- PASS: TestErrorJSONOutput (0.25s)
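The stdout block above is a stream of CloudEvents-style JSON lines, one per setup step or error. Below is a minimal sketch of decoding such a line; the struct is illustrative rather than minikube's own type, and the sample line is the error event copied from the output above.

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent captures the fields visible in the JSON output above.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"6111c3c8-dba7-459f-b2ff-37566ee81a8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Events of type io.k8s.sigs.minikube.error carry the exit code and message.
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
}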

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (79.19s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-278491 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-278491 --driver=kvm2  --container-runtime=crio: (37.811180494s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-281526 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-281526 --driver=kvm2  --container-runtime=crio: (38.675967726s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-278491
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-281526
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-281526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-281526
helpers_test.go:175: Cleaning up "first-278491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-278491
--- PASS: TestMinikubeProfile (79.19s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-983561 --memory=3072 --mount-string /tmp/TestMountStartserial3582341234/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-983561 --memory=3072 --mount-string /tmp/TestMountStartserial3582341234/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.217721889s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.22s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-983561 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-983561 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)
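The verification above uses "findmnt --json /minikube-host" to confirm the host directory is mounted. Below is a minimal sketch of checking that output; the schema (a "filesystems" array with target/source/fstype/options keys) is standard util-linux findmnt output, and the sample line is illustrative rather than captured from this run.

package main

import (
	"encoding/json"
	"fmt"
)

// findmntOutput models the JSON that `findmnt --json <target>` prints.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		Fstype  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Illustrative sample; the real test reads this over `minikube ssh`.
	sample := []byte(`{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw,relatime"}]}`)

	var out findmntOutput
	if err := json.Unmarshal(sample, &out); err != nil {
		panic(err)
	}
	mounted := len(out.Filesystems) > 0 && out.Filesystems[0].Target == "/minikube-host"
	fmt.Println("mount present:", mounted)
}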

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-003685 --memory=3072 --mount-string /tmp/TestMountStartserial3582341234/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-003685 --memory=3072 --mount-string /tmp/TestMountStartserial3582341234/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.355469761s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.36s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-003685 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-003685 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-983561 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-003685 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-003685 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-003685
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-003685: (1.358745736s)
--- PASS: TestMountStart/serial/Stop (1.36s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.78s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-003685
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-003685: (17.783669494s)
--- PASS: TestMountStart/serial/RestartStopped (18.78s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-003685 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-003685 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615187 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1124 03:15:19.099449  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:15:45.498088  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-615187 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m38.9597323s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.30s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-615187 -- rollout status deployment/busybox: (4.740239246s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-5fs57 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-xnvst -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-5fs57 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-xnvst -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-5fs57 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-xnvst -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.47s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-5fs57 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-5fs57 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-xnvst -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615187 -- exec busybox-7b57f96db7-xnvst -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
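The in-pod commands above extract the host IP with "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3", i.e. the third space-separated field of the fifth output line. Below is the same parsing sketched in Go; the sample nslookup output is hypothetical and stands in for what busybox prints inside the pod.

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`: fifth line, third field.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical nslookup output; only the shape matters for the parsing.
	sample := "Server:\t\t10.96.0.10\n" +
		"Address:\t10.96.0.10:53\n" +
		"\n" +
		"Name:\thost.minikube.internal\n" +
		"Address: 1 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // 192.168.39.1
}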

                                                
                                    
TestMultiNode/serial/AddNode (44.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-615187 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-615187 -v=5 --alsologtostderr: (44.138972319s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.59s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-615187 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp testdata/cp-test.txt multinode-615187:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp multinode-615187:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1260289235/001/cp-test_multinode-615187.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp multinode-615187:/home/docker/cp-test.txt multinode-615187-m02:/home/docker/cp-test_multinode-615187_multinode-615187-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m02 "sudo cat /home/docker/cp-test_multinode-615187_multinode-615187-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp multinode-615187:/home/docker/cp-test.txt multinode-615187-m03:/home/docker/cp-test_multinode-615187_multinode-615187-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m03 "sudo cat /home/docker/cp-test_multinode-615187_multinode-615187-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp testdata/cp-test.txt multinode-615187-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp multinode-615187-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1260289235/001/cp-test_multinode-615187-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp multinode-615187-m02:/home/docker/cp-test.txt multinode-615187:/home/docker/cp-test_multinode-615187-m02_multinode-615187.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187 "sudo cat /home/docker/cp-test_multinode-615187-m02_multinode-615187.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp multinode-615187-m02:/home/docker/cp-test.txt multinode-615187-m03:/home/docker/cp-test_multinode-615187-m02_multinode-615187-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m03 "sudo cat /home/docker/cp-test_multinode-615187-m02_multinode-615187-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp testdata/cp-test.txt multinode-615187-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp multinode-615187-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1260289235/001/cp-test_multinode-615187-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp multinode-615187-m03:/home/docker/cp-test.txt multinode-615187:/home/docker/cp-test_multinode-615187-m03_multinode-615187.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187 "sudo cat /home/docker/cp-test_multinode-615187-m03_multinode-615187.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 cp multinode-615187-m03:/home/docker/cp-test.txt multinode-615187-m02:/home/docker/cp-test_multinode-615187-m03_multinode-615187-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 ssh -n multinode-615187-m02 "sudo cat /home/docker/cp-test_multinode-615187-m03_multinode-615187-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.24s)

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-615187 node stop m03: (1.609554186s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-615187 status: exit status 7 (340.405504ms)

                                                
                                                
-- stdout --
	multinode-615187
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-615187-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-615187-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-615187 status --alsologtostderr: exit status 7 (344.601027ms)

                                                
                                                
-- stdout --
	multinode-615187
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-615187-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-615187-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:17:41.631528  208433 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:17:41.631809  208433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:41.631818  208433 out.go:374] Setting ErrFile to fd 2...
	I1124 03:17:41.631823  208433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:41.632030  208433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:17:41.632200  208433 out.go:368] Setting JSON to false
	I1124 03:17:41.632234  208433 mustload.go:66] Loading cluster: multinode-615187
	I1124 03:17:41.632311  208433 notify.go:221] Checking for updates...
	I1124 03:17:41.632734  208433 config.go:182] Loaded profile config "multinode-615187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:41.632761  208433 status.go:174] checking status of multinode-615187 ...
	I1124 03:17:41.635059  208433 status.go:371] multinode-615187 host status = "Running" (err=<nil>)
	I1124 03:17:41.635082  208433 host.go:66] Checking if "multinode-615187" exists ...
	I1124 03:17:41.638066  208433 main.go:143] libmachine: domain multinode-615187 has defined MAC address 52:54:00:dd:5f:a0 in network mk-multinode-615187
	I1124 03:17:41.638570  208433 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:5f:a0", ip: ""} in network mk-multinode-615187: {Iface:virbr1 ExpiryTime:2025-11-24 04:15:16 +0000 UTC Type:0 Mac:52:54:00:dd:5f:a0 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-615187 Clientid:01:52:54:00:dd:5f:a0}
	I1124 03:17:41.638599  208433 main.go:143] libmachine: domain multinode-615187 has defined IP address 192.168.39.178 and MAC address 52:54:00:dd:5f:a0 in network mk-multinode-615187
	I1124 03:17:41.638783  208433 host.go:66] Checking if "multinode-615187" exists ...
	I1124 03:17:41.639040  208433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:17:41.641602  208433 main.go:143] libmachine: domain multinode-615187 has defined MAC address 52:54:00:dd:5f:a0 in network mk-multinode-615187
	I1124 03:17:41.642061  208433 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:5f:a0", ip: ""} in network mk-multinode-615187: {Iface:virbr1 ExpiryTime:2025-11-24 04:15:16 +0000 UTC Type:0 Mac:52:54:00:dd:5f:a0 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-615187 Clientid:01:52:54:00:dd:5f:a0}
	I1124 03:17:41.642086  208433 main.go:143] libmachine: domain multinode-615187 has defined IP address 192.168.39.178 and MAC address 52:54:00:dd:5f:a0 in network mk-multinode-615187
	I1124 03:17:41.642284  208433 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/multinode-615187/id_rsa Username:docker}
	I1124 03:17:41.731363  208433 ssh_runner.go:195] Run: systemctl --version
	I1124 03:17:41.738609  208433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:17:41.757530  208433 kubeconfig.go:125] found "multinode-615187" server: "https://192.168.39.178:8443"
	I1124 03:17:41.757572  208433 api_server.go:166] Checking apiserver status ...
	I1124 03:17:41.757609  208433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:17:41.777593  208433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup
	W1124 03:17:41.789937  208433 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:17:41.790013  208433 ssh_runner.go:195] Run: ls
	I1124 03:17:41.795187  208433 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I1124 03:17:41.799734  208433 api_server.go:279] https://192.168.39.178:8443/healthz returned 200:
	ok
	I1124 03:17:41.799767  208433 status.go:463] multinode-615187 apiserver status = Running (err=<nil>)
	I1124 03:17:41.799780  208433 status.go:176] multinode-615187 status: &{Name:multinode-615187 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:17:41.799802  208433 status.go:174] checking status of multinode-615187-m02 ...
	I1124 03:17:41.801351  208433 status.go:371] multinode-615187-m02 host status = "Running" (err=<nil>)
	I1124 03:17:41.801383  208433 host.go:66] Checking if "multinode-615187-m02" exists ...
	I1124 03:17:41.803499  208433 main.go:143] libmachine: domain multinode-615187-m02 has defined MAC address 52:54:00:7d:cd:47 in network mk-multinode-615187
	I1124 03:17:41.803921  208433 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7d:cd:47", ip: ""} in network mk-multinode-615187: {Iface:virbr1 ExpiryTime:2025-11-24 04:16:09 +0000 UTC Type:0 Mac:52:54:00:7d:cd:47 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-615187-m02 Clientid:01:52:54:00:7d:cd:47}
	I1124 03:17:41.803951  208433 main.go:143] libmachine: domain multinode-615187-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:7d:cd:47 in network mk-multinode-615187
	I1124 03:17:41.804082  208433 host.go:66] Checking if "multinode-615187-m02" exists ...
	I1124 03:17:41.804286  208433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:17:41.806218  208433 main.go:143] libmachine: domain multinode-615187-m02 has defined MAC address 52:54:00:7d:cd:47 in network mk-multinode-615187
	I1124 03:17:41.806567  208433 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7d:cd:47", ip: ""} in network mk-multinode-615187: {Iface:virbr1 ExpiryTime:2025-11-24 04:16:09 +0000 UTC Type:0 Mac:52:54:00:7d:cd:47 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-615187-m02 Clientid:01:52:54:00:7d:cd:47}
	I1124 03:17:41.806595  208433 main.go:143] libmachine: domain multinode-615187-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:7d:cd:47 in network mk-multinode-615187
	I1124 03:17:41.806723  208433 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21975-185833/.minikube/machines/multinode-615187-m02/id_rsa Username:docker}
	I1124 03:17:41.889135  208433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:17:41.905901  208433 status.go:176] multinode-615187-m02 status: &{Name:multinode-615187-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:17:41.905947  208433 status.go:174] checking status of multinode-615187-m03 ...
	I1124 03:17:41.907755  208433 status.go:371] multinode-615187-m03 host status = "Stopped" (err=<nil>)
	I1124 03:17:41.907776  208433 status.go:384] host is not running, skipping remaining checks
	I1124 03:17:41.907782  208433 status.go:176] multinode-615187-m03 status: &{Name:multinode-615187-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
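The stderr above shows the apiserver check: locate the kube-apiserver process, then GET https://<node-ip>:8443/healthz and treat a 200 "ok" response as healthy. Below is a minimal sketch of that probe; skipping TLS verification is an assumption made for brevity here, and a real client would be configured with the cluster's CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiServerHealthy GETs the healthz endpoint and reports whether it answered 200 "ok".
func apiServerHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only; a real client would use the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiServerHealthy("https://192.168.39.178:8443/healthz")
	fmt.Println(ok, err)
}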

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-615187 node start m03 -v=5 --alsologtostderr: (40.785565148s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.30s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (290.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-615187
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-615187
E1124 03:18:48.570855  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:20:19.102403  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:20:45.503251  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-615187: (2m41.613168152s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615187 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-615187 --wait=true -v=5 --alsologtostderr: (2m9.097350893s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-615187
--- PASS: TestMultiNode/serial/RestartKeepsNodes (290.85s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-615187 node delete m03: (2.162465163s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.62s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (156.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 stop
E1124 03:25:19.101714  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:25:45.503172  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-615187 stop: (2m36.459343364s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-615187 status: exit status 7 (69.67195ms)

                                                
                                                
-- stdout --
	multinode-615187
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-615187-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-615187 status --alsologtostderr: exit status 7 (68.733212ms)

                                                
                                                
-- stdout --
	multinode-615187
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-615187-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:25:53.275569  211227 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:25:53.275690  211227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:25:53.275699  211227 out.go:374] Setting ErrFile to fd 2...
	I1124 03:25:53.275703  211227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:25:53.275909  211227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:25:53.276117  211227 out.go:368] Setting JSON to false
	I1124 03:25:53.276152  211227 mustload.go:66] Loading cluster: multinode-615187
	I1124 03:25:53.276226  211227 notify.go:221] Checking for updates...
	I1124 03:25:53.276721  211227 config.go:182] Loaded profile config "multinode-615187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:25:53.276748  211227 status.go:174] checking status of multinode-615187 ...
	I1124 03:25:53.279179  211227 status.go:371] multinode-615187 host status = "Stopped" (err=<nil>)
	I1124 03:25:53.279195  211227 status.go:384] host is not running, skipping remaining checks
	I1124 03:25:53.279201  211227 status.go:176] multinode-615187 status: &{Name:multinode-615187 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:25:53.279218  211227 status.go:174] checking status of multinode-615187-m02 ...
	I1124 03:25:53.280450  211227 status.go:371] multinode-615187-m02 host status = "Stopped" (err=<nil>)
	I1124 03:25:53.280465  211227 status.go:384] host is not running, skipping remaining checks
	I1124 03:25:53.280470  211227 status.go:176] multinode-615187-m02 status: &{Name:multinode-615187-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (156.60s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (83.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615187 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-615187 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m22.763290732s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615187 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.22s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-615187
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615187-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-615187-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (80.554669ms)

                                                
                                                
-- stdout --
	* [multinode-615187-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-615187-m02' is duplicated with machine name 'multinode-615187-m02' in profile 'multinode-615187'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615187-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-615187-m03 --driver=kvm2  --container-runtime=crio: (39.211637077s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-615187
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-615187: exit status 80 (196.564406ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-615187 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-615187-m03 already exists in multinode-615187-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-615187-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.42s)

                                                
                                    
TestScheduledStopUnix (107.19s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-127118 --memory=3072 --driver=kvm2  --container-runtime=crio
E1124 03:30:45.503808  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-127118 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.551030497s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-127118 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 03:31:07.793856  213579 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:31:07.794093  213579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:31:07.794104  213579 out.go:374] Setting ErrFile to fd 2...
	I1124 03:31:07.794110  213579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:31:07.794324  213579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:31:07.794657  213579 out.go:368] Setting JSON to false
	I1124 03:31:07.794777  213579 mustload.go:66] Loading cluster: scheduled-stop-127118
	I1124 03:31:07.795227  213579 config.go:182] Loaded profile config "scheduled-stop-127118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:31:07.795335  213579 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/config.json ...
	I1124 03:31:07.795607  213579 mustload.go:66] Loading cluster: scheduled-stop-127118
	I1124 03:31:07.795771  213579 config.go:182] Loaded profile config "scheduled-stop-127118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-127118 -n scheduled-stop-127118
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-127118 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 03:31:08.079694  213625 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:31:08.079778  213625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:31:08.079782  213625 out.go:374] Setting ErrFile to fd 2...
	I1124 03:31:08.079786  213625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:31:08.079996  213625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:31:08.080219  213625 out.go:368] Setting JSON to false
	I1124 03:31:08.080407  213625 daemonize_unix.go:73] killing process 213614 as it is an old scheduled stop
	I1124 03:31:08.080539  213625 mustload.go:66] Loading cluster: scheduled-stop-127118
	I1124 03:31:08.080879  213625 config.go:182] Loaded profile config "scheduled-stop-127118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:31:08.080945  213625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/config.json ...
	I1124 03:31:08.081113  213625 mustload.go:66] Loading cluster: scheduled-stop-127118
	I1124 03:31:08.081206  213625 config.go:182] Loaded profile config "scheduled-stop-127118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 03:31:08.087791  189749 retry.go:31] will retry after 135.421µs: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.088984  189749 retry.go:31] will retry after 162.928µs: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.090125  189749 retry.go:31] will retry after 294.187µs: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.091273  189749 retry.go:31] will retry after 204.955µs: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.092411  189749 retry.go:31] will retry after 278.632µs: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.093549  189749 retry.go:31] will retry after 950µs: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.094685  189749 retry.go:31] will retry after 693.516µs: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.095811  189749 retry.go:31] will retry after 881.425µs: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.096939  189749 retry.go:31] will retry after 1.409797ms: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.099131  189749 retry.go:31] will retry after 2.360763ms: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.102345  189749 retry.go:31] will retry after 5.711708ms: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.108544  189749 retry.go:31] will retry after 6.392061ms: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.115794  189749 retry.go:31] will retry after 10.443431ms: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.127033  189749 retry.go:31] will retry after 25.626724ms: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.153307  189749 retry.go:31] will retry after 22.723199ms: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
I1124 03:31:08.176553  189749 retry.go:31] will retry after 58.336219ms: open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-127118 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-127118 -n scheduled-stop-127118
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-127118
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-127118 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 03:31:33.797262  213773 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:31:33.797612  213773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:31:33.797622  213773 out.go:374] Setting ErrFile to fd 2...
	I1124 03:31:33.797626  213773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:31:33.797814  213773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:31:33.798066  213773 out.go:368] Setting JSON to false
	I1124 03:31:33.798165  213773 mustload.go:66] Loading cluster: scheduled-stop-127118
	I1124 03:31:33.798519  213773 config.go:182] Loaded profile config "scheduled-stop-127118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:31:33.798618  213773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/scheduled-stop-127118/config.json ...
	I1124 03:31:33.798837  213773 mustload.go:66] Loading cluster: scheduled-stop-127118
	I1124 03:31:33.798951  213773 config.go:182] Loaded profile config "scheduled-stop-127118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-127118
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-127118: exit status 7 (64.388256ms)

                                                
                                                
-- stdout --
	scheduled-stop-127118
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-127118 -n scheduled-stop-127118
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-127118 -n scheduled-stop-127118: exit status 7 (63.042237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-127118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-127118
--- PASS: TestScheduledStopUnix (107.19s)
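For reference, the scheduled-stop flow exercised above boils down to a few CLI calls. A minimal sketch, assuming minikube is on PATH and using the profile name from the log:

    # schedule a stop 5 minutes out, then replace it with a 15s schedule;
    # the newer call kills the previously scheduled stop process
    minikube stop -p scheduled-stop-127118 --schedule 5m
    minikube stop -p scheduled-stop-127118 --schedule 15s

    # cancel any pending scheduled stop
    minikube stop -p scheduled-stop-127118 --cancel-scheduled

    # once a scheduled stop has fired, status reports Stopped and exits 7
    minikube status --format='{{.Host}}' -p scheduled-stop-127118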

                                                
                                    
x
+
TestRunningBinaryUpgrade (119.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3894681433 start -p running-upgrade-486043 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3894681433 start -p running-upgrade-486043 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m31.52020315s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-486043 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-486043 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (24.193441428s)
helpers_test.go:175: Cleaning up "running-upgrade-486043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-486043
--- PASS: TestRunningBinaryUpgrade (119.40s)
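The running-upgrade pattern above is simply two start invocations against the same profile, first with an older release binary and then with the binary under test. A minimal sketch, assuming an old release binary has been fetched to a local path (the /tmp name in the log is a per-run temp file, so the path below is a placeholder):

    # create the cluster with the old release binary
    /tmp/minikube-v1.32.0 start -p running-upgrade-486043 --memory=3072 --vm-driver=kvm2 --container-runtime=crio

    # upgrade the running cluster in place by re-running start with the new binary
    minikube start -p running-upgrade-486043 --memory=3072 --driver=kvm2 --container-runtime=crio

    # clean up the profile
    minikube delete -p running-upgrade-486043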

                                                
                                    
x
+
TestKubernetesUpgrade (83.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.162946254s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-469670
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-469670: (1.723244599s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-469670 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-469670 status --format={{.Host}}: exit status 7 (63.656731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.344454181s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-469670 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (81.893593ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-469670] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-469670
	    minikube start -p kubernetes-upgrade-469670 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4696702 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-469670 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-469670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (13.219865834s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-469670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-469670
--- PASS: TestKubernetesUpgrade (83.50s)
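The upgrade/downgrade sequence above matches the recovery path that minikube itself suggests: upgrading an existing profile is supported, downgrading is not and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal sketch using the profile name and versions from the log:

    # upgrade: stop the v1.28.0 cluster, then restart it at v1.34.1
    minikube stop -p kubernetes-upgrade-469670
    minikube start -p kubernetes-upgrade-469670 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio

    # a downgrade attempt on the same profile exits 106; the supported route
    # is to recreate the profile at the older version
    minikube delete -p kubernetes-upgrade-469670
    minikube start -p kubernetes-upgrade-469670 --kubernetes-version=v1.28.0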

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-449227 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-449227 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (92.52179ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-449227] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
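The usage error above is the intended guard: --no-kubernetes and --kubernetes-version are mutually exclusive, and the command exits with status 14 (MK_USAGE) before doing any work. A minimal sketch of the rejected call and the fix minikube suggests when the version is pinned in the global config:

    # rejected with exit status 14: the two flags conflict
    minikube start -p NoKubernetes-449227 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio

    # if kubernetes-version is set in the global config, unset it
    minikube config unset kubernetes-version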

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (93.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-793115 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-793115 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m33.17144603s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (93.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (77.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-449227 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-449227 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.802248533s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-449227 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (77.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (6.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-449227 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-449227 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (5.495196567s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-449227 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-449227 status -o json: exit status 2 (208.897266ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-449227","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-449227
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.57s)
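Re-running start with --no-kubernetes on an existing profile keeps the VM up but stops the Kubernetes components, which is why status afterwards exits 2 with Kubelet and APIServer reported as Stopped. A minimal sketch:

    # switch the existing profile to no-kubernetes mode
    minikube start -p NoKubernetes-449227 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=crio

    # host stays Running, kubelet and apiserver are Stopped; exit status is 2
    minikube -p NoKubernetes-449227 status -o json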

                                                
                                    
x
+
TestNoKubernetes/serial/Start (20.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-449227 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-449227 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (20.069532857s)
--- PASS: TestNoKubernetes/serial/Start (20.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-793115 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c291ee81-7310-40e4-ad8e-300c99d67d46] Pending
helpers_test.go:352: "busybox" [c291ee81-7310-40e4-ad8e-300c99d67d46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c291ee81-7310-40e4-ad8e-300c99d67d46] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.006124348s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-793115 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.56s)
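The deploy step above is a plain kubectl workflow against the cluster context: create the busybox pod from testdata, wait for it to become Ready, then run a command inside it. A minimal sketch pointed at the old-k8s-version-793115 context (the wait call is an illustrative stand-in for the test's own polling loop):

    kubectl --context old-k8s-version-793115 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-793115 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-793115 exec busybox -- /bin/sh -c "ulimit -n"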

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-584458 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-584458 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (134.812086ms)

                                                
                                                
-- stdout --
	* [false-584458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:34:03.079809  216171 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:34:03.079902  216171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:34:03.079912  216171 out.go:374] Setting ErrFile to fd 2...
	I1124 03:34:03.079916  216171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:34:03.080153  216171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-185833/.minikube/bin
	I1124 03:34:03.080669  216171 out.go:368] Setting JSON to false
	I1124 03:34:03.082035  216171 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11783,"bootTime":1763943460,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:34:03.082143  216171 start.go:143] virtualization: kvm guest
	I1124 03:34:03.084326  216171 out.go:179] * [false-584458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:34:03.085694  216171 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:34:03.085740  216171 notify.go:221] Checking for updates...
	I1124 03:34:03.088681  216171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:34:03.089950  216171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-185833/kubeconfig
	I1124 03:34:03.091015  216171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-185833/.minikube
	I1124 03:34:03.092112  216171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:34:03.093251  216171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:34:03.094917  216171 config.go:182] Loaded profile config "NoKubernetes-449227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1124 03:34:03.095036  216171 config.go:182] Loaded profile config "old-k8s-version-793115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:34:03.095137  216171 config.go:182] Loaded profile config "running-upgrade-486043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1124 03:34:03.095285  216171 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:34:03.136990  216171 out.go:179] * Using the kvm2 driver based on user configuration
	I1124 03:34:03.138166  216171 start.go:309] selected driver: kvm2
	I1124 03:34:03.138190  216171 start.go:927] validating driver "kvm2" against <nil>
	I1124 03:34:03.138209  216171 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:34:03.140448  216171 out.go:203] 
	W1124 03:34:03.141602  216171 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1124 03:34:03.142748  216171 out.go:203] 

                                                
                                                
** /stderr **
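The failure above is the intended validation: with --container-runtime=crio, minikube refuses --cni=false and exits with status 14 (MK_USAGE) before any VM is created, which is why the debug log dump below only reports a missing profile and context. A minimal sketch of the rejected invocation:

    # exits 14: the "crio" container runtime requires CNI
    minikube start -p false-584458 --memory=3072 --cni=false --driver=kvm2 --container-runtime=crio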
net_test.go:88: 
----------------------- debugLogs start: false-584458 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-584458" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:33:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.232:8443
  name: old-k8s-version-793115
contexts:
- context:
    cluster: old-k8s-version-793115
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:33:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: old-k8s-version-793115
  name: old-k8s-version-793115
current-context: ""
kind: Config
users:
- name: old-k8s-version-793115
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt
    client-key: /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-584458

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-584458"

                                                
                                                
----------------------- debugLogs end: false-584458 [took: 3.968808971s] --------------------------------
helpers_test.go:175: Cleaning up "false-584458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-584458
--- PASS: TestNetworkPlugins/group/false (4.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21975-185833/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-449227 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-449227 "sudo systemctl is-active --quiet service kubelet": exit status 1 (183.681707ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
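The check above confirms over SSH that the kubelet unit is not active inside the guest; systemctl's non-zero exit propagates through minikube ssh as exit status 1. A minimal sketch using the same command as the log:

    # non-zero exit confirms kubelet is not running in no-kubernetes mode
    minikube ssh -p NoKubernetes-449227 "sudo systemctl is-active --quiet service kubelet"
    echo "exit status: $?"    # expected: non-zero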

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (1.360006342s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-793115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-793115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.223712493s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-793115 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)
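Enabling an addon on a live cluster is a single addons call; here the test also overrides the metrics-server image and registry and then inspects the resulting deployment. A minimal sketch using the same overrides as the log:

    minikube addons enable metrics-server -p old-k8s-version-793115 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-793115 describe deploy/metrics-server -n kube-system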

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (83.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-793115 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-793115 --alsologtostderr -v=3: (1m23.452779093s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (83.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-449227
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-449227: (1.507736911s)
--- PASS: TestNoKubernetes/serial/Stop (1.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (17.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-449227 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-449227 --driver=kvm2  --container-runtime=crio: (17.883457028s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (17.88s)

                                                
                                    
x
+
TestISOImage/Setup (30.04s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-288632 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-288632 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.036771486s)
--- PASS: TestISOImage/Setup (30.04s)
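The Binaries subtests that follow all use the same pattern: ssh into the guest-288632 VM and check that a given tool is on the guest's PATH. A minimal sketch that loops over the binaries exercised below:

    # each check is just `minikube ssh "which <binary>"` against the ISO guest
    for bin in crictl curl docker git iptables podman rsync socat wget VBoxControl; do
      minikube -p guest-288632 ssh "which $bin" || echo "$bin not found"
    done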

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-449227 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-449227 "sudo systemctl is-active --quiet service kubelet": exit status 1 (165.578453ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.26s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.26s)
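For reference, the per-binary checks above can be reproduced by hand with the same minikube ssh invocation that iso_test.go:76 issues; a minimal shell sketch (the guest-288632 profile name is specific to this run):

	# Sketch: confirm each expected binary is on PATH inside the guest ISO (profile name from this run)
	for bin in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
	  out/minikube-linux-amd64 -p guest-288632 ssh "which $bin"
	done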

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-793115 -n old-k8s-version-793115
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-793115 -n old-k8s-version-793115: exit status 7 (69.403845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-793115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (72.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-793115 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-793115 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m12.619697304s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-793115 -n old-k8s-version-793115
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (72.94s)
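Together with EnableAddonAfterStop above, the second-start flow is: confirm the host is stopped, enable the dashboard addon while offline, then start again with the original flags. A sketch using the commands from this run:

	# Sketch: enable an addon on a stopped profile, then restart it with the same configuration
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-793115 -n old-k8s-version-793115   # prints "Stopped"; exit status 7 is expected here
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-793115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	out/minikube-linux-amd64 start -p old-k8s-version-793115 --memory=3072 --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.0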

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (132.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-646844 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 03:35:45.499541  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-646844 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (2m12.094213336s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (132.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (95.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-780317 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-780317 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m35.426340346s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8m6gm" [e11606a7-0265-42c3-80d6-04368b1204db] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8m6gm" [e11606a7-0265-42c3-80d6-04368b1204db] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004422373s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)
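The wait above is done by test helpers polling pod phase; roughly the same check can be made with kubectl wait (an approximation, not the harness's actual code):

	# Sketch: wait for the dashboard pod to become Ready in the kubernetes-dashboard namespace
	kubectl --context old-k8s-version-793115 wait --for=condition=ready --namespace=kubernetes-dashboard pod --selector=k8s-app=kubernetes-dashboard --timeout=9m0s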

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8m6gm" [e11606a7-0265-42c3-80d6-04368b1204db] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004269137s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-793115 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-793115 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-793115 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-793115 -n old-k8s-version-793115
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-793115 -n old-k8s-version-793115: exit status 2 (238.316115ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-793115 -n old-k8s-version-793115
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-793115 -n old-k8s-version-793115: exit status 2 (243.822725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-793115 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-793115 -n old-k8s-version-793115
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-793115 -n old-k8s-version-793115
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.98s)
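The Pause subtest drives the sequence logged above; a minimal sketch of the same cycle (the exit status 2 from status while paused is expected, as the test itself notes):

	# Sketch: pause, verify component status, then unpause the profile from this run
	out/minikube-linux-amd64 pause -p old-k8s-version-793115 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-793115 -n old-k8s-version-793115   # "Paused", exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-793115 -n old-k8s-version-793115     # "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-793115 --alsologtostderr -v=1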

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-646844 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3e273d7c-ca7c-4f15-94e6-f59e61ac5d42] Pending
helpers_test.go:352: "busybox" [3e273d7c-ca7c-4f15-94e6-f59e61ac5d42] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3e273d7c-ca7c-4f15-94e6-f59e61ac5d42] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005363483s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-646844 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-780317 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cd391a6b-2145-4927-85d8-62de51395eea] Pending
helpers_test.go:352: "busybox" [cd391a6b-2145-4927-85d8-62de51395eea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cd391a6b-2145-4927-85d8-62de51395eea] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003218518s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-780317 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-646844 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-646844 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (85.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-646844 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-646844 --alsologtostderr -v=3: (1m25.010264564s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (85.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-780317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-780317 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (83.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-780317 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-780317 --alsologtostderr -v=3: (1m23.534658427s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (83.53s)

                                                
                                    
x
+
TestPause/serial/Start (77.98s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-338254 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E1124 03:38:52.914777  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:52.921158  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:52.932490  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:52.953849  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:52.995308  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:53.076928  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:53.239251  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:53.561016  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:54.203095  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:55.484730  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-338254 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m17.984222102s)
--- PASS: TestPause/serial/Start (77.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-871319 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-871319 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (55.417197041s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.42s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646844 -n no-preload-646844
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646844 -n no-preload-646844: exit status 7 (63.814476ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-646844 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (67.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-646844 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 03:39:33.893124  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-646844 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m6.974110955s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646844 -n no-preload-646844
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (67.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-780317 -n embed-certs-780317
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-780317 -n embed-certs-780317: exit status 7 (83.235772ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-780317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (62.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-780317 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-780317 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.602297994s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-780317 -n embed-certs-780317
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (62.85s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-871319 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [97b5b469-1772-4d33-853f-1bdb571cff02] Pending
helpers_test.go:352: "busybox" [97b5b469-1772-4d33-853f-1bdb571cff02] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1124 03:40:14.855477  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [97b5b469-1772-4d33-853f-1bdb571cff02] Running
E1124 03:40:19.098927  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.005901792s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-871319 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-871319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-871319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.061132724s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-871319 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (88.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-871319 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-871319 --alsologtostderr -v=3: (1m28.411700152s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (88.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dzc98" [9be04554-0fc3-4c5e-a2bb-2040e50b7a73] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004683132s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nnxwn" [64b8cd54-dc30-4fed-bfe9-9ad74d17f69a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004672763s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dzc98" [9be04554-0fc3-4c5e-a2bb-2040e50b7a73] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003852998s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-646844 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nnxwn" [64b8cd54-dc30-4fed-bfe9-9ad74d17f69a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004899359s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-780317 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-646844 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-646844 --alsologtostderr -v=1
E1124 03:40:45.497770  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/addons-775116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-646844 -n no-preload-646844
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-646844 -n no-preload-646844: exit status 2 (218.260516ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-646844 -n no-preload-646844
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-646844 -n no-preload-646844: exit status 2 (215.974663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-646844 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-646844 -n no-preload-646844
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-646844 -n no-preload-646844
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.53s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-780317 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-780317 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-780317 -n embed-certs-780317
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-780317 -n embed-certs-780317: exit status 2 (236.819064ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-780317 -n embed-certs-780317
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-780317 -n embed-certs-780317: exit status 2 (231.836845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-780317 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-780317 -n embed-certs-780317
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-780317 -n embed-certs-780317
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.86s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (45.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-788142 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-788142 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (45.62810764s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.63s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (97.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m37.587795057s)
--- PASS: TestNetworkPlugins/group/auto/Start (97.59s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (119.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3824261202 start -p stopped-upgrade-787828 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3824261202 start -p stopped-upgrade-787828 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m15.364204511s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3824261202 -p stopped-upgrade-787828 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3824261202 -p stopped-upgrade-787828 stop: (1.809302294s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-787828 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-787828 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.769069691s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.94s)
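The stopped-binary upgrade exercised here is a three-step flow with two binaries; a sketch assuming the v1.32.0 release binary has been fetched to /tmp as in this run:

	# Sketch: provision with the old release, stop it, then restart with the binary under test
	/tmp/minikube-v1.32.0.3824261202 start -p stopped-upgrade-787828 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.32.0.3824261202 -p stopped-upgrade-787828 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-787828 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio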

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-788142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-788142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.379928008s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-788142 --alsologtostderr -v=3
E1124 03:41:36.777256  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-788142 --alsologtostderr -v=3: (11.568883578s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.57s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-788142 -n newest-cni-788142
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-788142 -n newest-cni-788142: exit status 7 (76.238554ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-788142 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (34.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-788142 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-788142 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (33.835023096s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-788142 -n newest-cni-788142
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871319 -n default-k8s-diff-port-871319
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871319 -n default-k8s-diff-port-871319: exit status 7 (87.842436ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-871319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-871319 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-871319 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (59.30701698s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871319 -n default-k8s-diff-port-871319
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.65s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-788142 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-788142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-788142 --alsologtostderr -v=1: (1.446099854s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-788142 -n newest-cni-788142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-788142 -n newest-cni-788142: exit status 2 (361.058257ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-788142 -n newest-cni-788142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-788142 -n newest-cni-788142: exit status 2 (333.279358ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-788142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-788142 --alsologtostderr -v=1: (1.081149729s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-788142 -n newest-cni-788142
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-788142 -n newest-cni-788142
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (99.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m39.297278026s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (99.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-584458 "pgrep -a kubelet"
I1124 03:42:31.445247  189749 config.go:182] Loaded profile config "auto-584458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.29s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-584458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f2jv8" [10c11738-afbf-4bfc-953f-d0cae89ff2be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f2jv8" [10c11738-afbf-4bfc-953f-d0cae89ff2be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004447009s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-584458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sjngq" [a6a58597-3edb-4ce8-9d98-6706b90572e9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1124 03:42:50.445665  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:50.452237  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:50.463724  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:50.485225  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:50.526856  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:50.608866  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:50.770738  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:51.092820  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:51.735446  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:53.016990  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:42:55.578554  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sjngq" [a6a58597-3edb-4ce8-9d98-6706b90572e9] Running
E1124 03:43:00.700188  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.003548628s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.01s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-787828
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-787828: (1.354417276s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (72.89s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m12.886598349s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.89s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (90.47s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m30.472929973s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sjngq" [a6a58597-3edb-4ce8-9d98-6706b90572e9] Running
E1124 03:43:10.941621  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004392765s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-871319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-871319 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-871319 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871319 -n default-k8s-diff-port-871319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871319 -n default-k8s-diff-port-871319: exit status 2 (246.586721ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-871319 -n default-k8s-diff-port-871319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-871319 -n default-k8s-diff-port-871319: exit status 2 (241.498703ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-871319 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871319 -n default-k8s-diff-port-871319
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-871319 -n default-k8s-diff-port-871319
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (113.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1124 03:43:31.423122  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:52.914750  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m53.180255399s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (113.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-hwcxb" [e02b33da-0cba-4d3d-a22f-89ec54311c14] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004807043s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-b6mff" [bbae9386-4dfb-4d5f-aba1-f122c56d1030] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1124 03:44:12.385310  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/no-preload-646844/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-b6mff" [bbae9386-4dfb-4d5f-aba1-f122c56d1030] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005277317s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-584458 "pgrep -a kubelet"
I1124 03:44:13.326548  189749 config.go:182] Loaded profile config "kindnet-584458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-584458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lctfl" [cf1a78b2-d5f2-4032-a65d-eb4fa4bd380d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lctfl" [cf1a78b2-d5f2-4032-a65d-eb4fa4bd380d] Running
E1124 03:44:20.619179  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004794883s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-584458 "pgrep -a kubelet"
I1124 03:44:17.647575  189749 config.go:182] Loaded profile config "calico-584458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.25s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-584458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h4npf" [59fdab91-3b87-4278-9cf0-d09ca7020db2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h4npf" [59fdab91-3b87-4278-9cf0-d09ca7020db2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004282114s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-584458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-584458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-584458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
I1124 03:44:30.201163  189749 config.go:182] Loaded profile config "custom-flannel-584458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-584458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7dbrd" [1649bda4-a2e7-412c-8e15-e8f4e488b15f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7dbrd" [1649bda4-a2e7-412c-8e15-e8f4e488b15f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004139851s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (68.11s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m8.104957596s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-584458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (94.59s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-584458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m34.593714433s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.59s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.2s)
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.16s)
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.16s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                    
TestISOImage/VersionJSON (0.19s)
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: ed59d3745fcfec53d8728304fa00738e8aac7ede
iso_test.go:118:   iso_version: v1.37.0-1763935228-21975
iso_test.go:118:   kicbase_version: v0.0.48-1763789673-21948
--- PASS: TestISOImage/VersionJSON (0.19s)
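For context, the fields printed above come from parsing /version.json inside the guest ISO. The following is only a reconstruction assembled from the logged values, not the verbatim file (which may contain additional fields), but it illustrates the shape of the data the test parses:

	{
	  "minikube_version": "v1.37.0",
	  "commit": "ed59d3745fcfec53d8728304fa00738e8aac7ede",
	  "iso_version": "v1.37.0-1763935228-21975",
	  "kicbase_version": "v0.0.48-1763789673-21948"
	}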

                                                
                                    
TestISOImage/eBPFSupport (0.18s)
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-288632 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
E1124 03:45:02.175292  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-584458 "pgrep -a kubelet"
I1124 03:45:08.898455  189749 config.go:182] Loaded profile config "enable-default-cni-584458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-584458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fsbl5" [1b76d4da-8ccf-4f6c-9ce1-2efccab34764] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 03:45:09.831251  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:09.837682  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:09.849143  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:09.870590  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:09.912114  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:09.993657  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:10.155394  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:10.477540  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:11.119324  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:12.401767  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:14.963593  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-fsbl5" [1b76d4da-8ccf-4f6c-9ce1-2efccab34764] Running
E1124 03:45:19.099557  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/functional-803727/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:20.085113  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004047394s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-584458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-4mp8c" [32fd5fda-8e79-4fe4-b816-1b0d40f043a3] Running
E1124 03:45:50.808556  189749 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/default-k8s-diff-port-871319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004908687s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-584458 "pgrep -a kubelet"
I1124 03:45:54.074666  189749 config.go:182] Loaded profile config "flannel-584458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-584458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jhbk9" [f943a3f8-08c2-418d-8ab2-74b192a98e89] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jhbk9" [f943a3f8-08c2-418d-8ab2-74b192a98e89] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005133887s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-584458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-584458 "pgrep -a kubelet"
I1124 03:46:20.383843  189749 config.go:182] Loaded profile config "bridge-584458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-584458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gwhdl" [87ff66c3-6c72-40e0-8acf-781f81326137] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gwhdl" [87ff66c3-6c72-40e0-8acf-781f81326137] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.0035892s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-584458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-584458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (40/345)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
261 TestStartStop/group/disable-driver-mounts 0.45
271 TestNetworkPlugins/group/kubenet 3.83
286 TestNetworkPlugins/group/cilium 4.29

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-775116 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-236119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-236119
--- SKIP: TestStartStop/group/disable-driver-mounts (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-584458 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-584458" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:33:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.232:8443
  name: old-k8s-version-793115
contexts:
- context:
    cluster: old-k8s-version-793115
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:33:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: old-k8s-version-793115
  name: old-k8s-version-793115
current-context: ""
kind: Config
users:
- name: old-k8s-version-793115
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt
    client-key: /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-584458

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-584458"

                                                
                                                
----------------------- debugLogs end: kubenet-584458 [took: 3.633250346s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-584458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-584458
--- SKIP: TestNetworkPlugins/group/kubenet (3.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-584458 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-584458" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:33:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.232:8443
  name: old-k8s-version-793115
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-185833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:34:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.216:8443
  name: running-upgrade-486043
contexts:
- context:
    cluster: old-k8s-version-793115
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:33:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: old-k8s-version-793115
  name: old-k8s-version-793115
- context:
    cluster: running-upgrade-486043
    user: running-upgrade-486043
  name: running-upgrade-486043
current-context: running-upgrade-486043
kind: Config
users:
- name: old-k8s-version-793115
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.crt
    client-key: /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/old-k8s-version-793115/client.key
- name: running-upgrade-486043
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/running-upgrade-486043/client.crt
    client-key: /home/jenkins/minikube-integration/21975-185833/.minikube/profiles/running-upgrade-486043/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-584458

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-584458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-584458"

                                                
                                                
----------------------- debugLogs end: cilium-584458 [took: 4.117363178s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-584458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-584458
--- SKIP: TestNetworkPlugins/group/cilium (4.29s)

                                                
                                    