Test Report: KVM_Linux_crio 22089

                    
334c0a8a01ce6327cc86bd51efb70eb94afee1a0:2025-12-10:42712

Failed tests (4/431)

Order  Failed test                                     Duration (s)
46     TestAddons/parallel/Ingress                     154.04
345    TestPreload                                     120.03
353    TestKubernetesUpgrade                           931.93
371    TestPause/serial/SecondStartNoReconfiguration   48
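To reproduce a failure locally, each case can be re-run in isolation through the Go integration-test harness. A minimal sketch for the Ingress case, assuming the minikube repo root as the working directory and that this job's driver/runtime are passed via --minikube-start-args (the flag value and the 30m timeout are assumptions about the local setup):

# Re-run only the failing Ingress case against the same driver/runtime as this job.
# Timeout and --minikube-start-args value are assumptions; adjust to the local harness.
go test -v -timeout 30m -run 'TestAddons/parallel/Ingress' ./test/integration \
  -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"

The other three failures re-run the same way by swapping the -run pattern; note that go test splits the pattern on '/' per subtest level, so combine top-level tests with care.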
TestAddons/parallel/Ingress (154.04s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-873698 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-873698 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-873698 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [42f127f8-e477-4ebc-a82d-17e652b8be12] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [42f127f8-e477-4ebc-a82d-17e652b8be12] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003873867s
I1210 05:47:36.011389   12588 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-873698 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.363783811s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-873698 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.151
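The decisive failure is the in-VM probe at addons_test.go:266: the ssh session exits with status 28, which is curl's exit code for a timed-out request, so ingress-nginx never answered on 127.0.0.1 within the 2m14s the harness allowed, and the check is then reported as exit status 1. A minimal sketch for reproducing the probe by hand against this profile (the controller deployment name is assumed to be the stock ingress-nginx-controller; adjust if the addon names it differently):

# Re-issue the probe with an explicit timeout and the HTTP status code printed.
out/minikube-linux-amd64 -p addons-873698 ssh \
  "curl -sS -o /dev/null -w '%{http_code}\n' --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"

# If it still times out, check the controller and the test backend
# (deployment name assumed from the stock ingress-nginx manifests).
kubectl --context addons-873698 -n ingress-nginx get pods -o wide
kubectl --context addons-873698 get ingress
kubectl --context addons-873698 get pod,svc nginx
kubectl --context addons-873698 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50

A 200 from the first command with a longer --max-time would point at a slow controller rather than a broken route; a connection refused or empty reply points at the controller pod or its service.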
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-873698 -n addons-873698
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-873698 logs -n 25: (1.165708871s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-841800                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-841800 │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │ 10 Dec 25 05:44 UTC │
	│ start   │ --download-only -p binary-mirror-952457 --alsologtostderr --binary-mirror http://127.0.0.1:42765 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-952457 │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │                     │
	│ delete  │ -p binary-mirror-952457                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-952457 │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │ 10 Dec 25 05:44 UTC │
	│ addons  │ disable dashboard -p addons-873698                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │                     │
	│ addons  │ enable dashboard -p addons-873698                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │                     │
	│ start   │ -p addons-873698 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │ 10 Dec 25 05:46 UTC │
	│ addons  │ addons-873698 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │ 10 Dec 25 05:46 UTC │
	│ addons  │ addons-873698 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ enable headlamp -p addons-873698 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ ssh     │ addons-873698 ssh cat /opt/local-path-provisioner/pvc-c7207705-a0ff-4ab8-a446-2828f4377906_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:48 UTC │
	│ addons  │ addons-873698 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ ip      │ addons-873698 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ ssh     │ addons-873698 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-873698                                                                                                                                                                                                                                                                                                                                                                                         │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ addons  │ addons-873698 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │ 10 Dec 25 05:47 UTC │
	│ ip      │ addons-873698 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-873698        │ jenkins │ v1.37.0 │ 10 Dec 25 05:49 UTC │ 10 Dec 25 05:49 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:44:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:44:33.795608   13524 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:44:33.795875   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:44:33.795886   13524 out.go:374] Setting ErrFile to fd 2...
	I1210 05:44:33.795889   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:44:33.796156   13524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 05:44:33.796730   13524 out.go:368] Setting JSON to false
	I1210 05:44:33.797597   13524 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1618,"bootTime":1765343856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:44:33.797652   13524 start.go:143] virtualization: kvm guest
	I1210 05:44:33.799958   13524 out.go:179] * [addons-873698] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:44:33.801539   13524 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:44:33.801544   13524 notify.go:221] Checking for updates...
	I1210 05:44:33.802731   13524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:44:33.804125   13524 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 05:44:33.805476   13524 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:44:33.806676   13524 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:44:33.807898   13524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:44:33.809304   13524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:44:33.840258   13524 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 05:44:33.841404   13524 start.go:309] selected driver: kvm2
	I1210 05:44:33.841416   13524 start.go:927] validating driver "kvm2" against <nil>
	I1210 05:44:33.841427   13524 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:44:33.842148   13524 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:44:33.842414   13524 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:44:33.842437   13524 cni.go:84] Creating CNI manager for ""
	I1210 05:44:33.842483   13524 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:44:33.842491   13524 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:44:33.842530   13524 start.go:353] cluster config:
	{Name:addons-873698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-873698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1210 05:44:33.842626   13524 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:44:33.844167   13524 out.go:179] * Starting "addons-873698" primary control-plane node in "addons-873698" cluster
	I1210 05:44:33.845676   13524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:44:33.845719   13524 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 05:44:33.845730   13524 cache.go:65] Caching tarball of preloaded images
	I1210 05:44:33.845826   13524 preload.go:238] Found /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 05:44:33.845840   13524 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 05:44:33.846158   13524 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/config.json ...
	I1210 05:44:33.846184   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/config.json: {Name:mkb737e8a51b92ee0839021e16dc32b3690de395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:33.846325   13524 start.go:360] acquireMachinesLock for addons-873698: {Name:mkc15d5369b31c34b8a5517a09471706fa3f291a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 05:44:33.846391   13524 start.go:364] duration metric: took 53.423µs to acquireMachinesLock for "addons-873698"
	I1210 05:44:33.846418   13524 start.go:93] Provisioning new machine with config: &{Name:addons-873698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-873698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:44:33.846467   13524 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 05:44:33.848007   13524 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1210 05:44:33.848169   13524 start.go:159] libmachine.API.Create for "addons-873698" (driver="kvm2")
	I1210 05:44:33.848195   13524 client.go:173] LocalClient.Create starting
	I1210 05:44:33.848267   13524 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem
	I1210 05:44:33.876270   13524 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem
	I1210 05:44:34.083399   13524 main.go:143] libmachine: creating domain...
	I1210 05:44:34.083417   13524 main.go:143] libmachine: creating network...
	I1210 05:44:34.084772   13524 main.go:143] libmachine: found existing default network
	I1210 05:44:34.084931   13524 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:44:34.085438   13524 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bafed0}
	I1210 05:44:34.085567   13524 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-873698</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:44:34.091835   13524 main.go:143] libmachine: creating private network mk-addons-873698 192.168.39.0/24...
	I1210 05:44:34.160974   13524 main.go:143] libmachine: private network mk-addons-873698 192.168.39.0/24 created
	I1210 05:44:34.161297   13524 main.go:143] libmachine: <network>
	  <name>mk-addons-873698</name>
	  <uuid>d559682f-d422-4ecd-b9d6-71db1c0825b7</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:3d:6a:08'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:44:34.161328   13524 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698 ...
	I1210 05:44:34.161347   13524 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22089-8667/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 05:44:34.161375   13524 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:44:34.161446   13524 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22089-8667/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22089-8667/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1210 05:44:34.431426   13524 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa...
	I1210 05:44:34.465159   13524 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/addons-873698.rawdisk...
	I1210 05:44:34.465200   13524 main.go:143] libmachine: Writing magic tar header
	I1210 05:44:34.465241   13524 main.go:143] libmachine: Writing SSH key tar header
	I1210 05:44:34.465313   13524 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698 ...
	I1210 05:44:34.465383   13524 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698
	I1210 05:44:34.465406   13524 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698 (perms=drwx------)
	I1210 05:44:34.465415   13524 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22089-8667/.minikube/machines
	I1210 05:44:34.465424   13524 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22089-8667/.minikube/machines (perms=drwxr-xr-x)
	I1210 05:44:34.465434   13524 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:44:34.465442   13524 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22089-8667/.minikube (perms=drwxr-xr-x)
	I1210 05:44:34.465449   13524 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22089-8667
	I1210 05:44:34.465468   13524 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22089-8667 (perms=drwxrwxr-x)
	I1210 05:44:34.465481   13524 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1210 05:44:34.465489   13524 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 05:44:34.465497   13524 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1210 05:44:34.465507   13524 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 05:44:34.465515   13524 main.go:143] libmachine: checking permissions on dir: /home
	I1210 05:44:34.465524   13524 main.go:143] libmachine: skipping /home - not owner
	I1210 05:44:34.465528   13524 main.go:143] libmachine: defining domain...
	I1210 05:44:34.466899   13524 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-873698</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/addons-873698.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-873698'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1210 05:44:34.474283   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:62:ca:d8 in network default
	I1210 05:44:34.474858   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:34.474875   13524 main.go:143] libmachine: starting domain...
	I1210 05:44:34.474879   13524 main.go:143] libmachine: ensuring networks are active...
	I1210 05:44:34.475585   13524 main.go:143] libmachine: Ensuring network default is active
	I1210 05:44:34.475918   13524 main.go:143] libmachine: Ensuring network mk-addons-873698 is active
	I1210 05:44:34.476436   13524 main.go:143] libmachine: getting domain XML...
	I1210 05:44:34.477315   13524 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-873698</name>
	  <uuid>eb662851-1838-461b-8c30-e421e58da7d5</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/addons-873698.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:66:4a:5b'/>
	      <source network='mk-addons-873698'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:62:ca:d8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1210 05:44:35.760276   13524 main.go:143] libmachine: waiting for domain to start...
	I1210 05:44:35.761517   13524 main.go:143] libmachine: domain is now running
	I1210 05:44:35.761532   13524 main.go:143] libmachine: waiting for IP...
	I1210 05:44:35.762228   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:35.762662   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:35.762682   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:35.763387   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:35.763433   13524 retry.go:31] will retry after 202.572171ms: waiting for domain to come up
	I1210 05:44:35.968143   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:35.968810   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:35.968832   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:35.969128   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:35.969169   13524 retry.go:31] will retry after 310.204341ms: waiting for domain to come up
	I1210 05:44:36.280825   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:36.281387   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:36.281403   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:36.281706   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:36.281752   13524 retry.go:31] will retry after 311.186022ms: waiting for domain to come up
	I1210 05:44:36.594397   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:36.594966   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:36.594987   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:36.595317   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:36.595347   13524 retry.go:31] will retry after 372.054927ms: waiting for domain to come up
	I1210 05:44:36.969072   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:36.969643   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:36.969671   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:36.970005   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:36.970047   13524 retry.go:31] will retry after 543.017769ms: waiting for domain to come up
	I1210 05:44:37.514801   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:37.515391   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:37.515409   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:37.515767   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:37.515803   13524 retry.go:31] will retry after 876.711219ms: waiting for domain to come up
	I1210 05:44:38.394146   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:38.394778   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:38.394797   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:38.395077   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:38.395114   13524 retry.go:31] will retry after 972.685405ms: waiting for domain to come up
	I1210 05:44:39.369461   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:39.370072   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:39.370093   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:39.370390   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:39.370427   13524 retry.go:31] will retry after 1.123453525s: waiting for domain to come up
	I1210 05:44:40.495680   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:40.496217   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:40.496232   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:40.496532   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:40.496565   13524 retry.go:31] will retry after 1.144902688s: waiting for domain to come up
	I1210 05:44:41.642968   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:41.643540   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:41.643558   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:41.643920   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:41.643954   13524 retry.go:31] will retry after 1.745044715s: waiting for domain to come up
	I1210 05:44:43.390408   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:43.391004   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:43.391020   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:43.391395   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:43.391433   13524 retry.go:31] will retry after 2.81196174s: waiting for domain to come up
	I1210 05:44:46.206671   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:46.207414   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:46.207434   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:46.207902   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:46.207935   13524 retry.go:31] will retry after 2.791706544s: waiting for domain to come up
	I1210 05:44:49.001878   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:49.002588   13524 main.go:143] libmachine: no network interface addresses found for domain addons-873698 (source=lease)
	I1210 05:44:49.002605   13524 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:44:49.002879   13524 main.go:143] libmachine: unable to find current IP address of domain addons-873698 in network mk-addons-873698 (interfaces detected: [])
	I1210 05:44:49.002923   13524 retry.go:31] will retry after 3.784887214s: waiting for domain to come up
	I1210 05:44:52.791922   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:52.792533   13524 main.go:143] libmachine: domain addons-873698 has current primary IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:52.792555   13524 main.go:143] libmachine: found domain IP: 192.168.39.151
	I1210 05:44:52.792563   13524 main.go:143] libmachine: reserving static IP address...
	I1210 05:44:52.793082   13524 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-873698", mac: "52:54:00:66:4a:5b", ip: "192.168.39.151"} in network mk-addons-873698
	I1210 05:44:52.972638   13524 main.go:143] libmachine: reserved static IP address 192.168.39.151 for domain addons-873698
	I1210 05:44:52.972663   13524 main.go:143] libmachine: waiting for SSH...
	I1210 05:44:52.972677   13524 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 05:44:52.976277   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:52.976845   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:minikube Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:52.976878   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:52.977109   13524 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:52.977483   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1210 05:44:52.977501   13524 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 05:44:53.090095   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:44:53.090452   13524 main.go:143] libmachine: domain creation complete
	I1210 05:44:53.092131   13524 machine.go:94] provisionDockerMachine start ...
	I1210 05:44:53.094738   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.095236   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:53.095265   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.095514   13524 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:53.095718   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1210 05:44:53.095727   13524 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:44:53.208294   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 05:44:53.208322   13524 buildroot.go:166] provisioning hostname "addons-873698"
	I1210 05:44:53.211583   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.212027   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:53.212055   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.212239   13524 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:53.212466   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1210 05:44:53.212482   13524 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-873698 && echo "addons-873698" | sudo tee /etc/hostname
	I1210 05:44:53.341018   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-873698
	
	I1210 05:44:53.344294   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.344766   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:53.344808   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.345005   13524 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:53.345260   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1210 05:44:53.345279   13524 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-873698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-873698/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-873698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:44:53.463512   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:44:53.463538   13524 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8667/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8667/.minikube}
	I1210 05:44:53.463607   13524 buildroot.go:174] setting up certificates
	I1210 05:44:53.463625   13524 provision.go:84] configureAuth start
	I1210 05:44:53.466681   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.467116   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:53.467137   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.469429   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.469759   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:53.469781   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.469927   13524 provision.go:143] copyHostCerts
	I1210 05:44:53.470005   13524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem (1082 bytes)
	I1210 05:44:53.470144   13524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem (1123 bytes)
	I1210 05:44:53.470290   13524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem (1675 bytes)
	I1210 05:44:53.470384   13524 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem org=jenkins.addons-873698 san=[127.0.0.1 192.168.39.151 addons-873698 localhost minikube]
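For reference, the SAN list requested above is baked into server.pem (the ServerCertPath shown earlier in this log) and can be checked from the host with openssl; a minimal sketch, assuming that path:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'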
	I1210 05:44:53.590023   13524 provision.go:177] copyRemoteCerts
	I1210 05:44:53.590082   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:44:53.592611   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.592992   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:53.593014   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.593208   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:44:53.680453   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 05:44:53.712423   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 05:44:53.742770   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 05:44:53.773110   13524 provision.go:87] duration metric: took 309.468ms to configureAuth
	I1210 05:44:53.773144   13524 buildroot.go:189] setting minikube options for container-runtime
	I1210 05:44:53.773374   13524 config.go:182] Loaded profile config "addons-873698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:44:53.776487   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.776866   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:53.776903   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:53.777102   13524 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:53.777382   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1210 05:44:53.777405   13524 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 05:44:54.026056   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 05:44:54.026080   13524 machine.go:97] duration metric: took 933.933495ms to provisionDockerMachine
	I1210 05:44:54.026090   13524 client.go:176] duration metric: took 20.177890205s to LocalClient.Create
	I1210 05:44:54.026104   13524 start.go:167] duration metric: took 20.177932199s to libmachine.API.Create "addons-873698"
	I1210 05:44:54.026114   13524 start.go:293] postStartSetup for "addons-873698" (driver="kvm2")
	I1210 05:44:54.026128   13524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:44:54.026210   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:44:54.029688   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.030225   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:54.030254   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.030478   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:44:54.115581   13524 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:44:54.120392   13524 info.go:137] Remote host: Buildroot 2025.02
	I1210 05:44:54.120417   13524 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/addons for local assets ...
	I1210 05:44:54.120500   13524 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/files for local assets ...
	I1210 05:44:54.120524   13524 start.go:296] duration metric: took 94.403486ms for postStartSetup
	I1210 05:44:54.123604   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.124024   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:54.124055   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.124290   13524 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/config.json ...
	I1210 05:44:54.124502   13524 start.go:128] duration metric: took 20.278025663s to createHost
	I1210 05:44:54.127149   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.127600   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:54.127623   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.127825   13524 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:54.128016   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1210 05:44:54.128030   13524 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 05:44:54.237125   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765345494.199139708
	
	I1210 05:44:54.237149   13524 fix.go:216] guest clock: 1765345494.199139708
	I1210 05:44:54.237156   13524 fix.go:229] Guest: 2025-12-10 05:44:54.199139708 +0000 UTC Remote: 2025-12-10 05:44:54.12451481 +0000 UTC m=+20.377940323 (delta=74.624898ms)
	I1210 05:44:54.237174   13524 fix.go:200] guest clock delta is within tolerance: 74.624898ms
	I1210 05:44:54.237180   13524 start.go:83] releasing machines lock for "addons-873698", held for 20.390778356s
	I1210 05:44:54.240161   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.240643   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:54.240669   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.241208   13524 ssh_runner.go:195] Run: cat /version.json
	I1210 05:44:54.241349   13524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:44:54.244286   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.244466   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.244694   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:54.244723   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.244880   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:44:54.244905   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:54.244932   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:54.245116   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:44:54.324175   13524 ssh_runner.go:195] Run: systemctl --version
	I1210 05:44:54.357776   13524 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 05:44:54.525092   13524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:44:54.532076   13524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:44:54.532167   13524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:44:54.552897   13524 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 05:44:54.552926   13524 start.go:496] detecting cgroup driver to use...
	I1210 05:44:54.552997   13524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:44:54.572700   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:44:54.590206   13524 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:44:54.590286   13524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:44:54.608942   13524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:44:54.626228   13524 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:44:54.775106   13524 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:44:54.992781   13524 docker.go:234] disabling docker service ...
	I1210 05:44:54.992858   13524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:44:55.008723   13524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:44:55.023846   13524 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:44:55.174642   13524 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:44:55.315935   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:44:55.331462   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:44:55.353715   13524 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 05:44:55.353807   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:55.366014   13524 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 05:44:55.366076   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:55.378463   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:55.391274   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:55.403387   13524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:44:55.416061   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:55.428055   13524 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:55.447807   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
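Taken together, the sed edits above should leave the CRI-O drop-in with roughly the following keys. This is a sketch only: the section headers assume the stock layout ([crio.image] / [crio.runtime]), and any other settings already present in 02-crio.conf are not visible in the log.

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits above; sketch)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]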
	I1210 05:44:55.459429   13524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:44:55.469446   13524 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 05:44:55.469517   13524 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 05:44:55.489208   13524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:44:55.500495   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:44:55.639099   13524 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 05:44:55.759117   13524 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 05:44:55.759196   13524 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 05:44:55.764341   13524 start.go:564] Will wait 60s for crictl version
	I1210 05:44:55.764426   13524 ssh_runner.go:195] Run: which crictl
	I1210 05:44:55.768411   13524 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 05:44:55.804939   13524 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 05:44:55.805083   13524 ssh_runner.go:195] Run: crio --version
	I1210 05:44:55.835290   13524 ssh_runner.go:195] Run: crio --version
	I1210 05:44:55.866978   13524 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1210 05:44:55.871890   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:55.872243   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:44:55.872260   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:44:55.872470   13524 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 05:44:55.877488   13524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:44:55.893107   13524 kubeadm.go:884] updating cluster {Name:addons-873698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-873698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.151 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:44:55.893250   13524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:44:55.893299   13524 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:44:55.923586   13524 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1210 05:44:55.923673   13524 ssh_runner.go:195] Run: which lz4
	I1210 05:44:55.928042   13524 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 05:44:55.932752   13524 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 05:44:55.932787   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1210 05:44:57.087089   13524 crio.go:462] duration metric: took 1.159076277s to copy over tarball
	I1210 05:44:57.087174   13524 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 05:44:58.563941   13524 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.476720459s)
	I1210 05:44:58.563970   13524 crio.go:469] duration metric: took 1.47684595s to extract the tarball
	I1210 05:44:58.563977   13524 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 05:44:58.599994   13524 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:44:58.639454   13524 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 05:44:58.639491   13524 cache_images.go:86] Images are preloaded, skipping loading
	I1210 05:44:58.639501   13524 kubeadm.go:935] updating node { 192.168.39.151 8443 v1.34.2 crio true true} ...
	I1210 05:44:58.639605   13524 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-873698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-873698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
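The kubelet flags above are written out as systemd unit content (the 313-byte 10-kubeadm.conf and 352-byte kubelet.service scp'd a few lines below). Reconstructed from the log output only, the override amounts to the following; which stanza lands in which of the two files is not visible here, so treat this as a sketch:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reconstructed from the log above; sketch)
	[Unit]
	Wants=crio.service

	[Service]
	# empty ExecStart= clears the packaged command before the override below
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-873698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.151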
	I1210 05:44:58.639695   13524 ssh_runner.go:195] Run: crio config
	I1210 05:44:58.687979   13524 cni.go:84] Creating CNI manager for ""
	I1210 05:44:58.688002   13524 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:44:58.688019   13524 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:44:58.688039   13524 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.151 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-873698 NodeName:addons-873698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:44:58.688196   13524 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-873698"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:44:58.688269   13524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 05:44:58.700989   13524 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:44:58.701053   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:44:58.712921   13524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 05:44:58.732619   13524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 05:44:58.751989   13524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
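The generated kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new above can be sanity-checked on the guest before init; a minimal sketch, assuming the binary path minikube uses elsewhere in this log (minikube itself does not run this step here):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new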
	I1210 05:44:58.771608   13524 ssh_runner.go:195] Run: grep 192.168.39.151	control-plane.minikube.internal$ /etc/hosts
	I1210 05:44:58.775785   13524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:44:58.790146   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:44:58.933268   13524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:44:58.969137   13524 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698 for IP: 192.168.39.151
	I1210 05:44:58.969163   13524 certs.go:195] generating shared ca certs ...
	I1210 05:44:58.969179   13524 certs.go:227] acquiring lock for ca certs: {Name:mkbf1082c8328cc7c1360f5f8b344958e8aa5792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:58.969329   13524 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key
	I1210 05:44:59.085093   13524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt ...
	I1210 05:44:59.085119   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt: {Name:mk39595fd4502b82833e0b275a503ece56d5b419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.085301   13524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key ...
	I1210 05:44:59.085320   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key: {Name:mk8b47a16c27edaf42f99d061738b085289faafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.085436   13524 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key
	I1210 05:44:59.196632   13524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.crt ...
	I1210 05:44:59.196659   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.crt: {Name:mk58f8f7325fa7f1ecf5f62c189b4b4608b6e31b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.196841   13524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key ...
	I1210 05:44:59.196859   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key: {Name:mka34365bce61f844a667d78e1fbc7a69a76f10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.196958   13524 certs.go:257] generating profile certs ...
	I1210 05:44:59.197050   13524 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.key
	I1210 05:44:59.197080   13524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt with IP's: []
	I1210 05:44:59.287365   13524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt ...
	I1210 05:44:59.287394   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: {Name:mk5e3338a914ea1a2edd73da4a7a62d45990385f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.287581   13524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.key ...
	I1210 05:44:59.287601   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.key: {Name:mk898a745d5c2d33eb958593e54e50447b638e24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.288184   13524 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.key.b2d69004
	I1210 05:44:59.288207   13524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.crt.b2d69004 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.151]
	I1210 05:44:59.359430   13524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.crt.b2d69004 ...
	I1210 05:44:59.359462   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.crt.b2d69004: {Name:mkca2b36b749508851bb8d1e609787c5d1794a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.359660   13524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.key.b2d69004 ...
	I1210 05:44:59.359685   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.key.b2d69004: {Name:mk48c59bc735672eaa4d1fc36be77ce76bc34c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.359790   13524 certs.go:382] copying /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.crt.b2d69004 -> /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.crt
	I1210 05:44:59.359884   13524 certs.go:386] copying /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.key.b2d69004 -> /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.key
	I1210 05:44:59.359953   13524 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/proxy-client.key
	I1210 05:44:59.359979   13524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/proxy-client.crt with IP's: []
	I1210 05:44:59.539108   13524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/proxy-client.crt ...
	I1210 05:44:59.539134   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/proxy-client.crt: {Name:mk8d71f60357cb2e73962b11a234a79eba17b9f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.539308   13524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/proxy-client.key ...
	I1210 05:44:59.539325   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/proxy-client.key: {Name:mk360a86ba78b9d3b3c73f43ff87312f80fc8c0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:59.539588   13524 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:44:59.539633   13524 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem (1082 bytes)
	I1210 05:44:59.539680   13524 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:44:59.539711   13524 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem (1675 bytes)
	I1210 05:44:59.540279   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:44:59.573769   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:44:59.607113   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:44:59.640014   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:44:59.681643   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 05:44:59.721585   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 05:44:59.751153   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:44:59.778894   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:44:59.808957   13524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:44:59.838719   13524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:44:59.858760   13524 ssh_runner.go:195] Run: openssl version
	I1210 05:44:59.865152   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:59.877233   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:44:59.889390   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:59.894628   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:59.894702   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:59.902229   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:44:59.914121   13524 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
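The symlink name b5213941.0 is the OpenSSL subject hash of minikubeCA.pem (with a .0 collision suffix), i.e. the value produced by the x509 -hash call two lines up:

	# prints the 8-hex-digit subject hash used for /etc/ssl/certs/<hash>.0
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem

OpenSSL looks up CAs in /etc/ssl/certs by that hashed name, which is why the plain ln -fs is enough for TLS clients inside the guest to trust the minikube CA.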
	I1210 05:44:59.925972   13524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:44:59.931696   13524 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:44:59.931763   13524 kubeadm.go:401] StartCluster: {Name:addons-873698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-873698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.151 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:44:59.931830   13524 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:44:59.931878   13524 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:44:59.966815   13524 cri.go:89] found id: ""
	I1210 05:44:59.966890   13524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:44:59.979377   13524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:44:59.991831   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:45:00.003955   13524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:45:00.003979   13524 kubeadm.go:158] found existing configuration files:
	
	I1210 05:45:00.004027   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 05:45:00.015263   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:45:00.015331   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:45:00.027024   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 05:45:00.038100   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:45:00.038170   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:45:00.049719   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 05:45:00.060507   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:45:00.060574   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:45:00.071805   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 05:45:00.082477   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:45:00.082554   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:45:00.093883   13524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 05:45:00.244562   13524 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:45:12.557349   13524 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 05:45:12.557417   13524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:45:12.557475   13524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:45:12.557589   13524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:45:12.557681   13524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:45:12.557737   13524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:45:12.559505   13524 out.go:252]   - Generating certificates and keys ...
	I1210 05:45:12.559606   13524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:45:12.559703   13524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:45:12.559816   13524 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:45:12.559903   13524 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:45:12.559985   13524 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:45:12.560059   13524 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:45:12.560152   13524 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:45:12.560324   13524 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-873698 localhost] and IPs [192.168.39.151 127.0.0.1 ::1]
	I1210 05:45:12.560423   13524 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:45:12.560592   13524 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-873698 localhost] and IPs [192.168.39.151 127.0.0.1 ::1]
	I1210 05:45:12.560676   13524 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:45:12.560727   13524 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:45:12.560771   13524 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:45:12.560850   13524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:45:12.560919   13524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:45:12.561002   13524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:45:12.561079   13524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:45:12.561182   13524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:45:12.561264   13524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:45:12.561387   13524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:45:12.561452   13524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:45:12.562702   13524 out.go:252]   - Booting up control plane ...
	I1210 05:45:12.562787   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:45:12.562877   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:45:12.562948   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:45:12.563046   13524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:45:12.563170   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:45:12.563279   13524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:45:12.563370   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:45:12.563406   13524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:45:12.563534   13524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:45:12.563631   13524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:45:12.563688   13524 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001616823s
	I1210 05:45:12.563781   13524 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 05:45:12.563866   13524 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.151:8443/livez
	I1210 05:45:12.563959   13524 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 05:45:12.564033   13524 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 05:45:12.564097   13524 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.995542067s
	I1210 05:45:12.564155   13524 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.427845438s
	I1210 05:45:12.564240   13524 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502066229s
	I1210 05:45:12.564378   13524 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 05:45:12.564513   13524 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 05:45:12.564567   13524 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 05:45:12.564722   13524 kubeadm.go:319] [mark-control-plane] Marking the node addons-873698 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 05:45:12.564790   13524 kubeadm.go:319] [bootstrap-token] Using token: iki8ya.9xq1mgpqxw5st82n
	I1210 05:45:12.565901   13524 out.go:252]   - Configuring RBAC rules ...
	I1210 05:45:12.565988   13524 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 05:45:12.566060   13524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 05:45:12.566183   13524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 05:45:12.566302   13524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 05:45:12.566418   13524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 05:45:12.566501   13524 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 05:45:12.566617   13524 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 05:45:12.566684   13524 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 05:45:12.566754   13524 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 05:45:12.566761   13524 kubeadm.go:319] 
	I1210 05:45:12.566838   13524 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 05:45:12.566851   13524 kubeadm.go:319] 
	I1210 05:45:12.566939   13524 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 05:45:12.566947   13524 kubeadm.go:319] 
	I1210 05:45:12.566969   13524 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 05:45:12.567032   13524 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 05:45:12.567113   13524 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 05:45:12.567129   13524 kubeadm.go:319] 
	I1210 05:45:12.567175   13524 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 05:45:12.567181   13524 kubeadm.go:319] 
	I1210 05:45:12.567217   13524 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 05:45:12.567222   13524 kubeadm.go:319] 
	I1210 05:45:12.567291   13524 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 05:45:12.567418   13524 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 05:45:12.567516   13524 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 05:45:12.567526   13524 kubeadm.go:319] 
	I1210 05:45:12.567676   13524 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 05:45:12.567763   13524 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 05:45:12.567779   13524 kubeadm.go:319] 
	I1210 05:45:12.567851   13524 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token iki8ya.9xq1mgpqxw5st82n \
	I1210 05:45:12.567937   13524 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1dfe0c9aa29dc58ecc184390bf98cfb6755884a7646c65f6333d6ae241a1230d \
	I1210 05:45:12.567955   13524 kubeadm.go:319] 	--control-plane 
	I1210 05:45:12.567958   13524 kubeadm.go:319] 
	I1210 05:45:12.568045   13524 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 05:45:12.568059   13524 kubeadm.go:319] 
	I1210 05:45:12.568131   13524 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token iki8ya.9xq1mgpqxw5st82n \
	I1210 05:45:12.568230   13524 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1dfe0c9aa29dc58ecc184390bf98cfb6755884a7646c65f6333d6ae241a1230d 
	I1210 05:45:12.568241   13524 cni.go:84] Creating CNI manager for ""
	I1210 05:45:12.568247   13524 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:45:12.569591   13524 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 05:45:12.570866   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 05:45:12.584715   13524 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
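The 496-byte /etc/cni/net.d/1-k8s.conflist itself is not dumped in the log; a generic bridge + host-local sketch using the pod CIDR from the kubeadm options above gives the general shape (field values here are illustrative, not the file minikube actually wrote):

	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}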
	I1210 05:45:12.609387   13524 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 05:45:12.609540   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:12.609542   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-873698 minikube.k8s.io/updated_at=2025_12_10T05_45_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=addons-873698 minikube.k8s.io/primary=true
	I1210 05:45:12.664733   13524 ops.go:34] apiserver oom_adj: -16
	I1210 05:45:12.731612   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:13.232397   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:13.732347   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:14.232542   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:14.732374   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:15.232326   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:15.732529   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:16.232440   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:16.731748   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:17.231947   13524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:45:17.345883   13524 kubeadm.go:1114] duration metric: took 4.736429922s to wait for elevateKubeSystemPrivileges
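	The burst of `kubectl get sa default` calls above is a poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which is what the 4.7s "wait for elevateKubeSystemPrivileges" metric measures. A hand-rolled equivalent of that wait, written as a shell loop rather than minikube's Go retry helper (illustrative only), would be:
	
	# poll until the default ServiceAccount has been created
	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done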
	I1210 05:45:17.345928   13524 kubeadm.go:403] duration metric: took 17.41416837s to StartCluster
	I1210 05:45:17.345954   13524 settings.go:142] acquiring lock: {Name:mk3d395dc9d24e60f90f67efa719ff71be48daf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:45:17.346094   13524 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 05:45:17.346512   13524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/kubeconfig: {Name:mke7eeebab9139e29de7a6356b74da28e2a42365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:45:17.346962   13524 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.151 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:45:17.346978   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 05:45:17.347069   13524 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
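	The toEnable map above is the addon set this test profile requests at start. Outside the test harness, the same addons are toggled per profile with the minikube CLI, for example:
	
	minikube -p addons-873698 addons enable ingress
	minikube -p addons-873698 addons enable metrics-server
	minikube -p addons-873698 addons list   # inspect the resulting enabled/disabled state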
	I1210 05:45:17.347205   13524 addons.go:70] Setting yakd=true in profile "addons-873698"
	I1210 05:45:17.347228   13524 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-873698"
	I1210 05:45:17.347233   13524 addons.go:70] Setting inspektor-gadget=true in profile "addons-873698"
	I1210 05:45:17.347251   13524 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-873698"
	I1210 05:45:17.347244   13524 addons.go:70] Setting registry-creds=true in profile "addons-873698"
	I1210 05:45:17.347256   13524 addons.go:239] Setting addon inspektor-gadget=true in "addons-873698"
	I1210 05:45:17.347264   13524 addons.go:70] Setting volumesnapshots=true in profile "addons-873698"
	I1210 05:45:17.347278   13524 addons.go:239] Setting addon registry-creds=true in "addons-873698"
	I1210 05:45:17.347280   13524 addons.go:239] Setting addon volumesnapshots=true in "addons-873698"
	I1210 05:45:17.347292   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.347292   13524 addons.go:70] Setting metrics-server=true in profile "addons-873698"
	I1210 05:45:17.347304   13524 addons.go:239] Setting addon metrics-server=true in "addons-873698"
	I1210 05:45:17.347305   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.347310   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.347318   13524 addons.go:70] Setting storage-provisioner=true in profile "addons-873698"
	I1210 05:45:17.347331   13524 addons.go:239] Setting addon storage-provisioner=true in "addons-873698"
	I1210 05:45:17.347321   13524 addons.go:70] Setting default-storageclass=true in profile "addons-873698"
	I1210 05:45:17.347339   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.347364   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.347364   13524 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-873698"
	I1210 05:45:17.347300   13524 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-873698"
	I1210 05:45:17.348231   13524 addons.go:70] Setting registry=true in profile "addons-873698"
	I1210 05:45:17.348230   13524 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-873698"
	I1210 05:45:17.348244   13524 addons.go:239] Setting addon registry=true in "addons-873698"
	I1210 05:45:17.348267   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.348298   13524 addons.go:70] Setting ingress=true in profile "addons-873698"
	I1210 05:45:17.348316   13524 addons.go:239] Setting addon ingress=true in "addons-873698"
	I1210 05:45:17.348346   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.347252   13524 addons.go:70] Setting volcano=true in profile "addons-873698"
	I1210 05:45:17.348413   13524 addons.go:239] Setting addon volcano=true in "addons-873698"
	I1210 05:45:17.348441   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.347282   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.348738   13524 addons.go:70] Setting gcp-auth=true in profile "addons-873698"
	I1210 05:45:17.348750   13524 addons.go:70] Setting cloud-spanner=true in profile "addons-873698"
	I1210 05:45:17.348763   13524 mustload.go:66] Loading cluster: addons-873698
	I1210 05:45:17.348772   13524 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-873698"
	I1210 05:45:17.348805   13524 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-873698"
	I1210 05:45:17.348827   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.348935   13524 config.go:182] Loaded profile config "addons-873698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:45:17.349125   13524 addons.go:70] Setting ingress-dns=true in profile "addons-873698"
	I1210 05:45:17.349156   13524 addons.go:239] Setting addon ingress-dns=true in "addons-873698"
	I1210 05:45:17.349184   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.349542   13524 out.go:179] * Verifying Kubernetes components...
	I1210 05:45:17.347210   13524 config.go:182] Loaded profile config "addons-873698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:45:17.347241   13524 addons.go:239] Setting addon yakd=true in "addons-873698"
	I1210 05:45:17.349844   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.348738   13524 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-873698"
	I1210 05:45:17.350018   13524 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-873698"
	I1210 05:45:17.350059   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.348763   13524 addons.go:239] Setting addon cloud-spanner=true in "addons-873698"
	I1210 05:45:17.350209   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.351413   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1210 05:45:17.355717   13524 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 05:45:17.355857   13524 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 05:45:17.355896   13524 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:45:17.355935   13524 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 05:45:17.355950   13524 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 05:45:17.355951   13524 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 05:45:17.355975   13524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:45:17.356148   13524 addons.go:239] Setting addon default-storageclass=true in "addons-873698"
	I1210 05:45:17.357057   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.356333   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.356505   13524 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-873698"
	I1210 05:45:17.357814   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:17.357384   13524 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:45:17.358095   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:45:17.358113   13524 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:45:17.358423   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 05:45:17.358428   13524 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:45:17.358442   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 05:45:17.357388   13524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 05:45:17.358651   13524 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 05:45:17.358716   13524 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 05:45:17.358734   13524 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 05:45:17.358137   13524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 05:45:17.359623   13524 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 05:45:17.359630   13524 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 05:45:17.359658   13524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 05:45:17.359624   13524 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 05:45:17.359696   13524 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 05:45:17.359644   13524 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 05:45:17.359632   13524 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 05:45:17.361124   13524 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:45:17.361141   13524 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:45:17.361185   13524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 05:45:17.361194   13524 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:45:17.361600   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 05:45:17.362071   13524 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 05:45:17.362087   13524 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:45:17.362093   13524 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 05:45:17.362099   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 05:45:17.362125   13524 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:45:17.362412   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 05:45:17.362088   13524 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 05:45:17.362501   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 05:45:17.362883   13524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:45:17.363940   13524 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 05:45:17.363956   13524 out.go:179]   - Using image docker.io/busybox:stable
	I1210 05:45:17.364561   13524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 05:45:17.364688   13524 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:45:17.365007   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 05:45:17.365350   13524 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 05:45:17.365378   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 05:45:17.367245   13524 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 05:45:17.367263   13524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 05:45:17.367248   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.368684   13524 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:45:17.368697   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 05:45:17.369236   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.369406   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.369415   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.369641   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.369853   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.370412   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.370533   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.371419   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.371442   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.371454   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.371467   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.371882   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.371915   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.372139   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.372168   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.372452   13524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 05:45:17.373008   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.373023   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.373039   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.373448   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.373752   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.373770   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.374655   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.374908   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.374946   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.375056   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.375085   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.375263   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.375580   13524 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 05:45:17.375730   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.375999   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.376021   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.376060   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.376056   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.376127   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.376798   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.376848   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.376885   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.377196   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.377242   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.377260   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.377287   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.377288   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.377388   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.377753   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.377825   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.377851   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.378273   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.378304   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.378444   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.378474   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.378498   13524 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 05:45:17.378530   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.378792   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.379111   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.379553   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.379589   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.379743   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:17.381037   13524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 05:45:17.382141   13524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 05:45:17.382159   13524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 05:45:17.384401   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.384753   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:17.384776   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:17.384902   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	W1210 05:45:17.686291   13524 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33200->192.168.39.151:22: read: connection reset by peer
	I1210 05:45:17.686323   13524 retry.go:31] will retry after 312.335837ms: ssh: handshake failed: read tcp 192.168.39.1:33200->192.168.39.151:22: read: connection reset by peer
	I1210 05:45:18.133887   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:45:18.215178   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:45:18.228114   13524 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 05:45:18.228147   13524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 05:45:18.275026   13524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:45:18.275288   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
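	The bash pipeline above patches the live coredns ConfigMap: the first sed expression inserts a hosts block (resolving host.minikube.internal to the host-side address 192.168.39.1) immediately before the `forward . /etc/resolv.conf` line, the second inserts a `log` directive before `errors`, and the result is fed back through `kubectl replace`. Reconstructed from those two expressions (other plugins elided), the edited Corefile fragment looks like:
	
	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}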
	I1210 05:45:18.294379   13524 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 05:45:18.294410   13524 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 05:45:18.295412   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:45:18.347609   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 05:45:18.358411   13524 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 05:45:18.358441   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 05:45:18.438262   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:45:18.448593   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:45:18.481234   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:45:18.529963   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:45:18.568346   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:45:18.682390   13524 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 05:45:18.682423   13524 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 05:45:18.705163   13524 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 05:45:18.705202   13524 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 05:45:18.742434   13524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 05:45:18.742486   13524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 05:45:18.878818   13524 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 05:45:18.878841   13524 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 05:45:18.916978   13524 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 05:45:18.917003   13524 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 05:45:19.139231   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:45:19.176818   13524 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 05:45:19.176849   13524 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 05:45:19.270037   13524 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:45:19.270085   13524 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 05:45:19.278158   13524 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:45:19.278178   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 05:45:19.306400   13524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 05:45:19.306424   13524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 05:45:19.340937   13524 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 05:45:19.340976   13524 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 05:45:19.604057   13524 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 05:45:19.604091   13524 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 05:45:19.664247   13524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 05:45:19.664272   13524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 05:45:19.664306   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:45:19.686162   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:45:19.694421   13524 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:45:19.694441   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 05:45:19.956408   13524 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 05:45:19.956446   13524 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 05:45:20.057542   13524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 05:45:20.057573   13524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 05:45:20.090927   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:45:20.236529   13524 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:45:20.236560   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 05:45:20.322181   13524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 05:45:20.322213   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 05:45:20.400652   13524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 05:45:20.400679   13524 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 05:45:20.692893   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:45:20.817899   13524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 05:45:20.817937   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 05:45:21.078605   13524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 05:45:21.078632   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 05:45:21.483379   13524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:45:21.483410   13524 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 05:45:21.888011   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:45:22.579570   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.364348589s)
	I1210 05:45:22.579626   13524 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.304568641s)
	I1210 05:45:22.579675   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.44575157s)
	I1210 05:45:22.579683   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.284250836s)
	I1210 05:45:22.579648   13524 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.304338114s)
	I1210 05:45:22.579729   13524 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1210 05:45:22.579761   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.232114697s)
	I1210 05:45:22.579799   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.131173709s)
	I1210 05:45:22.580475   13524 node_ready.go:35] waiting up to 6m0s for node "addons-873698" to be "Ready" ...
	I1210 05:45:22.613345   13524 node_ready.go:49] node "addons-873698" is "Ready"
	I1210 05:45:22.613383   13524 node_ready.go:38] duration metric: took 32.863096ms for node "addons-873698" to be "Ready" ...
	I1210 05:45:22.613394   13524 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:45:22.613437   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:45:23.109235   13524 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-873698" context rescaled to 1 replicas
	I1210 05:45:23.249851   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.811550419s)
	I1210 05:45:24.800637   13524 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 05:45:24.803470   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:24.803890   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:24.803915   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:24.804083   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:25.098698   13524 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 05:45:25.151744   13524 addons.go:239] Setting addon gcp-auth=true in "addons-873698"
	I1210 05:45:25.151812   13524 host.go:66] Checking if "addons-873698" exists ...
	I1210 05:45:25.153894   13524 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 05:45:25.156467   13524 main.go:143] libmachine: domain addons-873698 has defined MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:25.156952   13524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:4a:5b", ip: ""} in network mk-addons-873698: {Iface:virbr1 ExpiryTime:2025-12-10 06:44:49 +0000 UTC Type:0 Mac:52:54:00:66:4a:5b Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:addons-873698 Clientid:01:52:54:00:66:4a:5b}
	I1210 05:45:25.156979   13524 main.go:143] libmachine: domain addons-873698 has defined IP address 192.168.39.151 and MAC address 52:54:00:66:4a:5b in network mk-addons-873698
	I1210 05:45:25.157131   13524 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/addons-873698/id_rsa Username:docker}
	I1210 05:45:26.336371   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.855085024s)
	I1210 05:45:26.336404   13524 addons.go:495] Verifying addon ingress=true in "addons-873698"
	I1210 05:45:26.336480   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.806476707s)
	I1210 05:45:26.336579   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.768180574s)
	I1210 05:45:26.336688   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.197431829s)
	I1210 05:45:26.336824   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.672496287s)
	I1210 05:45:26.336851   13524 addons.go:495] Verifying addon metrics-server=true in "addons-873698"
	I1210 05:45:26.336916   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.650717817s)
	I1210 05:45:26.336946   13524 addons.go:495] Verifying addon registry=true in "addons-873698"
	I1210 05:45:26.336980   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.246014438s)
	I1210 05:45:26.337090   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.644160894s)
	W1210 05:45:26.337279   13524 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:45:26.337306   13524 retry.go:31] will retry after 340.497385ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
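	The failure above (and the retry notice that repeats it) is a CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not finished registering the new kind, hence "ensure CRDs are installed first". minikube simply retries, this time with `kubectl apply --force` (the Run line a few entries below). When applying these manifests by hand, the race can be avoided by creating the CRDs first and waiting for them to be established, e.g.:
	
	# apply the CRDs, wait until the new kinds are served, then apply the objects that use them
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml -f rbac-volume-snapshot-controller.yaml \
	              -f volume-snapshot-controller-deployment.yaml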
	I1210 05:45:26.337629   13524 out.go:179] * Verifying ingress addon...
	I1210 05:45:26.338538   13524 out.go:179] * Verifying registry addon...
	I1210 05:45:26.338544   13524 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-873698 service yakd-dashboard -n yakd-dashboard
	
	I1210 05:45:26.340251   13524 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 05:45:26.341146   13524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 05:45:26.387527   13524 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:45:26.387560   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:26.387599   13524 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:45:26.387619   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:26.678348   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:45:26.858642   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:26.858849   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
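	The kapi.go lines above poll pods by label selector until the matching pods leave Pending and report Ready. A rough command-line equivalent (the harness does this via client-go, not kubectl, and the ingress check below uses the narrower controller selector because the admission job pods complete rather than become Ready) would be:
	
	kubectl --context addons-873698 -n ingress-nginx wait --for=condition=Ready \
	    pod -l app.kubernetes.io/component=controller --timeout=6m
	kubectl --context addons-873698 -n kube-system wait --for=condition=Ready \
	    pod -l kubernetes.io/minikube-addons=registry --timeout=6m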
	I1210 05:45:27.201997   13524 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.588534629s)
	I1210 05:45:27.202040   13524 api_server.go:72] duration metric: took 9.855051663s to wait for apiserver process to appear ...
	I1210 05:45:27.202044   13524 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.048127982s)
	I1210 05:45:27.202048   13524 api_server.go:88] waiting for apiserver healthz status ...
	I1210 05:45:27.202070   13524 api_server.go:253] Checking apiserver healthz at https://192.168.39.151:8443/healthz ...
	I1210 05:45:27.202296   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.314234786s)
	I1210 05:45:27.202335   13524 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-873698"
	I1210 05:45:27.203713   13524 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 05:45:27.203733   13524 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 05:45:27.205343   13524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:45:27.205954   13524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 05:45:27.206449   13524 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 05:45:27.206464   13524 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 05:45:27.231669   13524 api_server.go:279] https://192.168.39.151:8443/healthz returned 200:
	ok
	I1210 05:45:27.235882   13524 api_server.go:141] control plane version: v1.34.2
	I1210 05:45:27.235921   13524 api_server.go:131] duration metric: took 33.865133ms to wait for apiserver health ...
	I1210 05:45:27.235933   13524 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 05:45:27.242856   13524 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:45:27.242877   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:27.302843   13524 system_pods.go:59] 20 kube-system pods found
	I1210 05:45:27.302897   13524 system_pods.go:61] "amd-gpu-device-plugin-h2nzx" [b2185212-d509-4fe1-8751-e03941bebb34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:27.302911   13524 system_pods.go:61] "coredns-66bc5c9577-6k4qk" [4c8c9312-49ce-40cc-b0da-d7a1e4ee8171] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:27.302922   13524 system_pods.go:61] "coredns-66bc5c9577-pbs5w" [7d50de10-1d73-4103-9acc-e63b30d392c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:27.302932   13524 system_pods.go:61] "csi-hostpath-attacher-0" [97f575a1-e3fd-4fbd-aa33-d93606d712ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:27.302940   13524 system_pods.go:61] "csi-hostpath-resizer-0" [fdbca536-1d4f-4692-8c3b-f7546ab3158d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:27.302956   13524 system_pods.go:61] "csi-hostpathplugin-rcczq" [09f2a517-9418-4f88-a343-a87ebb3118a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:27.302968   13524 system_pods.go:61] "etcd-addons-873698" [ea1da915-9e6c-467b-873e-ef0ba36ff265] Running
	I1210 05:45:27.302974   13524 system_pods.go:61] "kube-apiserver-addons-873698" [fd83872a-98d1-48e1-af17-7a51b44f187b] Running
	I1210 05:45:27.302979   13524 system_pods.go:61] "kube-controller-manager-addons-873698" [faa48fe2-f72e-4102-9d13-ffac3448fe0f] Running
	I1210 05:45:27.302987   13524 system_pods.go:61] "kube-ingress-dns-minikube" [6a7d26f9-194e-4f3d-80ce-17e899d2b880] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:27.302992   13524 system_pods.go:61] "kube-proxy-xqvf9" [0cb500e8-30a4-4914-913f-c184a12edd2e] Running
	I1210 05:45:27.302999   13524 system_pods.go:61] "kube-scheduler-addons-873698" [1c747244-0851-4ca6-aece-f97becb0264d] Running
	I1210 05:45:27.303009   13524 system_pods.go:61] "metrics-server-85b7d694d7-9jqpv" [f48fc950-6d12-4a7a-adca-ccf9ada338da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:27.303020   13524 system_pods.go:61] "nvidia-device-plugin-daemonset-8lp5b" [c33df452-a40c-4ce5-933c-cb75f4e74e60] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:27.303030   13524 system_pods.go:61] "registry-6b586f9694-j46wr" [e47de5e6-f940-443e-ae45-290cf2aa6613] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:27.303040   13524 system_pods.go:61] "registry-creds-764b6fb674-mnmxc" [7c626c4b-da8b-4cdf-9e8e-7e3eda7d17f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:27.303054   13524 system_pods.go:61] "registry-proxy-dpzx2" [8e5876c7-e3be-42b2-a3bb-a526b0413ef8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:27.303067   13524 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wmw46" [154fdaca-3eed-44de-b393-79d40da1b162] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:27.303082   13524 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xhbq2" [87bc74e0-ac19-4398-8def-bc7d4034aacd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:27.303091   13524 system_pods.go:61] "storage-provisioner" [5b966211-0174-4fb4-9e9b-f1d1e31f9287] Running
	I1210 05:45:27.303101   13524 system_pods.go:74] duration metric: took 67.160474ms to wait for pod list to return data ...
	I1210 05:45:27.303112   13524 default_sa.go:34] waiting for default service account to be created ...
	I1210 05:45:27.309527   13524 default_sa.go:45] found service account: "default"
	I1210 05:45:27.309560   13524 default_sa.go:55] duration metric: took 6.441628ms for default service account to be created ...
	I1210 05:45:27.309569   13524 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 05:45:27.316786   13524 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:27.316814   13524 system_pods.go:89] "amd-gpu-device-plugin-h2nzx" [b2185212-d509-4fe1-8751-e03941bebb34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:27.316821   13524 system_pods.go:89] "coredns-66bc5c9577-6k4qk" [4c8c9312-49ce-40cc-b0da-d7a1e4ee8171] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:27.316829   13524 system_pods.go:89] "coredns-66bc5c9577-pbs5w" [7d50de10-1d73-4103-9acc-e63b30d392c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:27.316835   13524 system_pods.go:89] "csi-hostpath-attacher-0" [97f575a1-e3fd-4fbd-aa33-d93606d712ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:27.316845   13524 system_pods.go:89] "csi-hostpath-resizer-0" [fdbca536-1d4f-4692-8c3b-f7546ab3158d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:27.316852   13524 system_pods.go:89] "csi-hostpathplugin-rcczq" [09f2a517-9418-4f88-a343-a87ebb3118a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:27.316856   13524 system_pods.go:89] "etcd-addons-873698" [ea1da915-9e6c-467b-873e-ef0ba36ff265] Running
	I1210 05:45:27.316860   13524 system_pods.go:89] "kube-apiserver-addons-873698" [fd83872a-98d1-48e1-af17-7a51b44f187b] Running
	I1210 05:45:27.316864   13524 system_pods.go:89] "kube-controller-manager-addons-873698" [faa48fe2-f72e-4102-9d13-ffac3448fe0f] Running
	I1210 05:45:27.316873   13524 system_pods.go:89] "kube-ingress-dns-minikube" [6a7d26f9-194e-4f3d-80ce-17e899d2b880] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:27.316876   13524 system_pods.go:89] "kube-proxy-xqvf9" [0cb500e8-30a4-4914-913f-c184a12edd2e] Running
	I1210 05:45:27.316880   13524 system_pods.go:89] "kube-scheduler-addons-873698" [1c747244-0851-4ca6-aece-f97becb0264d] Running
	I1210 05:45:27.316884   13524 system_pods.go:89] "metrics-server-85b7d694d7-9jqpv" [f48fc950-6d12-4a7a-adca-ccf9ada338da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:27.316890   13524 system_pods.go:89] "nvidia-device-plugin-daemonset-8lp5b" [c33df452-a40c-4ce5-933c-cb75f4e74e60] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:27.316896   13524 system_pods.go:89] "registry-6b586f9694-j46wr" [e47de5e6-f940-443e-ae45-290cf2aa6613] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:27.316901   13524 system_pods.go:89] "registry-creds-764b6fb674-mnmxc" [7c626c4b-da8b-4cdf-9e8e-7e3eda7d17f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:27.316906   13524 system_pods.go:89] "registry-proxy-dpzx2" [8e5876c7-e3be-42b2-a3bb-a526b0413ef8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:27.316914   13524 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wmw46" [154fdaca-3eed-44de-b393-79d40da1b162] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:27.316919   13524 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xhbq2" [87bc74e0-ac19-4398-8def-bc7d4034aacd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:27.316923   13524 system_pods.go:89] "storage-provisioner" [5b966211-0174-4fb4-9e9b-f1d1e31f9287] Running
	I1210 05:45:27.316930   13524 system_pods.go:126] duration metric: took 7.355253ms to wait for k8s-apps to be running ...
	I1210 05:45:27.316938   13524 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 05:45:27.316978   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:45:27.345041   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:27.347749   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:27.371131   13524 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 05:45:27.371167   13524 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 05:45:27.428458   13524 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:45:27.428481   13524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 05:45:27.538405   13524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:45:27.714199   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:27.850285   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:27.850412   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:28.213621   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:28.346439   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:28.350718   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:28.754314   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:28.755324   13524 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.438322786s)
	I1210 05:45:28.755364   13524 system_svc.go:56] duration metric: took 1.438410666s WaitForService to wait for kubelet
	I1210 05:45:28.755327   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.076926643s)
	I1210 05:45:28.755376   13524 kubeadm.go:587] duration metric: took 11.408386815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:45:28.755396   13524 node_conditions.go:102] verifying NodePressure condition ...
	I1210 05:45:28.838058   13524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.299618666s)
	I1210 05:45:28.839116   13524 addons.go:495] Verifying addon gcp-auth=true in "addons-873698"
	I1210 05:45:28.840621   13524 out.go:179] * Verifying gcp-auth addon...
	I1210 05:45:28.842605   13524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 05:45:28.842679   13524 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 05:45:28.842716   13524 node_conditions.go:123] node cpu capacity is 2
	I1210 05:45:28.842730   13524 node_conditions.go:105] duration metric: took 87.328011ms to run NodePressure ...
	I1210 05:45:28.842743   13524 start.go:242] waiting for startup goroutines ...
	I1210 05:45:28.867902   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:28.869257   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:28.880781   13524 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 05:45:28.880806   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:29.210967   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:29.349899   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:29.349961   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:29.352004   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:29.710898   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:29.853204   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:29.854865   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:29.855351   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:30.211613   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:30.347678   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:30.348793   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:30.351261   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:30.711503   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:30.849227   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:30.850956   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:30.851324   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:31.213658   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:31.346901   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:31.346951   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:31.348194   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:31.709785   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:31.974981   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:31.975164   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:31.975400   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:32.211181   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:32.345484   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:32.345764   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:32.348459   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:32.711049   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:32.843991   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:32.844769   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:32.846225   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:33.210061   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:33.344293   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:33.344636   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:33.345921   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:33.710006   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:33.846389   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:33.847158   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:33.847421   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:34.210683   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:34.347782   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:34.348202   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:34.348902   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:34.712495   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:34.843561   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:34.845302   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:34.845399   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:35.211413   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:35.345820   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:35.345896   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:35.346154   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:35.711025   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:35.845769   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:35.846346   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:35.846514   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:36.210117   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:36.345026   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:36.345552   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:36.346433   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:36.710772   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:36.844815   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:36.845897   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:36.846620   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:37.208990   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:37.344537   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:37.345400   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:37.345448   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:37.710610   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:37.848578   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:37.852072   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:37.852843   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:38.211340   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:38.347293   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:38.349174   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:38.349307   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:38.711805   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:38.846224   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:38.846274   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:38.847520   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:39.211998   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:39.350091   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:39.353108   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:39.353659   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:39.711588   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:39.848439   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:39.854647   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:39.855987   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:40.223014   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:40.346471   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:40.347043   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:40.348619   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:40.710320   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:40.846766   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:40.846840   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:40.846854   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:41.209992   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:41.344036   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:41.345190   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:41.347128   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:41.710635   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:41.844931   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:41.845776   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:41.846642   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:42.210164   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:42.345059   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:42.345126   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:42.346018   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:42.713400   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:42.847815   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:42.851065   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:42.852500   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:43.213347   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:43.348039   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:43.351811   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:43.351902   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:43.711337   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:43.847254   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:43.848785   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:43.849629   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:44.282994   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:44.345295   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:44.349481   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:44.349986   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:44.711039   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:44.846943   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:44.847052   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:44.848455   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:45.210522   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:45.351986   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:45.354778   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:45.356707   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:45.712608   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:45.848264   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:45.850387   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:45.853141   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:46.217600   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:46.346059   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:46.347905   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:46.352212   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:46.710406   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:46.844462   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:46.845197   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:46.846773   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:47.211341   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:47.345476   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:47.346645   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:47.346771   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:47.709296   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:47.849332   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:47.849343   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:47.849863   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:48.213093   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:48.348379   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:48.348622   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:48.349040   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:48.709656   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:48.846195   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:48.847650   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:48.849450   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:49.211224   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:49.348221   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:49.348349   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:49.350308   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:49.711615   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:49.843959   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:49.845200   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:49.845761   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:50.210109   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:50.346071   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:50.346188   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:50.349889   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:50.710891   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:50.862849   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:50.863015   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:50.863095   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:51.209578   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:51.346461   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:51.348769   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:51.349544   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:51.710789   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:51.843641   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:51.846090   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:51.846239   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:52.342496   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:52.347004   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:52.347191   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:52.347289   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:52.711725   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:52.848773   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:52.849336   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:52.850078   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:53.212186   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:53.344559   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:53.346328   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:53.348469   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:53.709733   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:53.847079   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:53.849128   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:53.853320   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:54.210960   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:54.354918   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:54.354972   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:54.356004   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:54.710651   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:54.845227   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:54.845289   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:54.845431   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:55.210227   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:55.344975   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:55.345377   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:55.346964   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:55.709873   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:55.844712   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:55.845087   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:55.846960   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:56.210237   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:56.346095   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:56.346402   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:56.346454   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:56.717868   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:56.848785   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:56.848918   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:56.851931   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:57.210922   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:57.345370   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:57.345556   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:57.350410   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:57.710494   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:57.847032   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:57.847189   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:57.848457   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:58.211941   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:58.344478   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:58.345196   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:58.346108   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:58.711452   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:58.847818   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:58.847945   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:58.849367   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:59.214011   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:59.347685   13524 kapi.go:107] duration metric: took 33.006534342s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 05:45:59.349389   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:59.349890   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:59.710565   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:59.847918   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:59.849608   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:00.215682   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:00.349530   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:00.349884   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:00.709941   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:00.846792   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:00.847121   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:01.210406   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:01.345406   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:01.347527   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:01.710103   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:01.845959   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:01.846109   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:02.209767   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:02.347800   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:02.349422   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:02.713861   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:02.844478   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:02.848369   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:03.210245   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:03.345158   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:03.346823   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:03.710673   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:03.845310   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:03.845938   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:04.210480   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:04.344940   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:04.346486   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:04.710839   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:04.844039   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:04.845977   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:05.210980   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:05.348699   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:05.348924   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:05.709278   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:05.849619   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:05.852111   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:06.211153   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:06.345388   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:06.349434   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:06.711781   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:06.847319   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:06.848832   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:07.210458   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:07.347110   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:07.348899   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:07.713107   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:07.844491   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:07.846993   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:08.210562   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:08.345261   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:08.345270   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:08.711625   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:08.855963   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:08.857955   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:09.209926   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:09.345681   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:09.345777   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:09.709524   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:09.844823   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:09.847173   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:10.211138   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:10.345309   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:10.348388   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:10.715140   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:10.848842   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:10.850044   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:11.401952   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:11.402128   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:11.402760   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:11.711138   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:11.845073   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:11.847699   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:12.218541   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:12.345975   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:12.347377   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:12.711118   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:12.852068   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:12.853073   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:13.209901   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:13.351805   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:13.352065   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:13.713442   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:13.847651   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:13.847826   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:14.223774   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:14.345800   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:14.348800   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:14.804901   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:14.847124   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:14.848799   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:15.223010   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:15.347586   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:15.348781   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:15.710197   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:15.847526   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:15.848912   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:16.212883   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:16.351472   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:16.351642   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:16.713842   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:16.860743   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:16.860839   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:17.213212   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:17.351441   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:17.351764   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:17.711857   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:17.846202   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:17.850172   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:18.212859   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:18.345466   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:18.350598   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:18.712833   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:18.846837   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:18.849555   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:19.214553   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:19.362232   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:19.365118   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:19.710129   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:19.846226   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:19.848131   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:20.210490   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:20.347070   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:20.348345   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:20.712884   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:20.847855   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:20.851578   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:21.212603   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:21.587502   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:21.588024   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:21.711698   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:21.850733   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:21.850960   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:22.210146   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:22.345343   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:22.347053   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:22.711698   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:22.844191   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:22.845447   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:23.211621   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:23.344668   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:23.347508   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:23.710474   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:23.844677   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:23.845647   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:24.210860   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:24.345535   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:24.349471   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:24.711431   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:24.847733   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:24.847905   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:25.210091   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:25.348854   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:25.349158   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:25.712204   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:25.844463   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:25.846348   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:26.211752   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:26.345216   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:26.347344   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:26.712233   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:26.844868   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:26.846967   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:27.213247   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:27.344669   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:27.347127   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:27.711329   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:27.844316   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:27.849227   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:28.212055   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:28.349396   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:28.349630   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:28.712167   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:28.849920   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:28.850244   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:29.212466   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:46:29.344119   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:29.345906   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:29.711101   13524 kapi.go:107] duration metric: took 1m2.505144006s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 05:46:29.845069   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:29.846287   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:30.350847   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:30.353392   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:30.851752   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:30.853389   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:31.349973   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:31.350316   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:31.904773   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:31.904844   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:32.345772   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:32.346944   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:32.848692   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:32.854958   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:33.398213   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:33.398249   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:33.845214   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:33.849489   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:34.347923   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:34.351444   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:34.844763   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:34.845802   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:35.346273   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:35.348758   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:35.846114   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:35.847461   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:36.400944   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:36.401000   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:36.843323   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:36.845166   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:37.345938   13524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:46:37.345948   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:37.846035   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:37.847143   13524 kapi.go:107] duration metric: took 1m11.506886994s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 05:46:38.345616   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:38.848543   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:39.348252   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:39.848675   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:40.349156   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:40.847722   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:41.347183   13524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:46:41.846764   13524 kapi.go:107] duration metric: took 1m13.004152264s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 05:46:41.848174   13524 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-873698 cluster.
	I1210 05:46:41.849374   13524 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 05:46:41.850498   13524 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 05:46:41.851752   13524 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, registry-creds, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1210 05:46:41.852904   13524 addons.go:530] duration metric: took 1m24.505844363s for enable addons: enabled=[nvidia-device-plugin ingress-dns amd-gpu-device-plugin cloud-spanner default-storageclass storage-provisioner registry-creds inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1210 05:46:41.852948   13524 start.go:247] waiting for cluster config update ...
	I1210 05:46:41.852969   13524 start.go:256] writing updated cluster config ...
	I1210 05:46:41.853235   13524 ssh_runner.go:195] Run: rm -f paused
	I1210 05:46:41.860897   13524 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:46:41.864129   13524 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pbs5w" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:41.870646   13524 pod_ready.go:94] pod "coredns-66bc5c9577-pbs5w" is "Ready"
	I1210 05:46:41.870666   13524 pod_ready.go:86] duration metric: took 6.518299ms for pod "coredns-66bc5c9577-pbs5w" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:41.872757   13524 pod_ready.go:83] waiting for pod "etcd-addons-873698" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:41.878277   13524 pod_ready.go:94] pod "etcd-addons-873698" is "Ready"
	I1210 05:46:41.878297   13524 pod_ready.go:86] duration metric: took 5.516017ms for pod "etcd-addons-873698" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:41.880766   13524 pod_ready.go:83] waiting for pod "kube-apiserver-addons-873698" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:41.887968   13524 pod_ready.go:94] pod "kube-apiserver-addons-873698" is "Ready"
	I1210 05:46:41.887989   13524 pod_ready.go:86] duration metric: took 7.205153ms for pod "kube-apiserver-addons-873698" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:41.892497   13524 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-873698" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:42.265091   13524 pod_ready.go:94] pod "kube-controller-manager-addons-873698" is "Ready"
	I1210 05:46:42.265128   13524 pod_ready.go:86] duration metric: took 372.609143ms for pod "kube-controller-manager-addons-873698" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:42.465920   13524 pod_ready.go:83] waiting for pod "kube-proxy-xqvf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:42.865821   13524 pod_ready.go:94] pod "kube-proxy-xqvf9" is "Ready"
	I1210 05:46:42.865856   13524 pod_ready.go:86] duration metric: took 399.906352ms for pod "kube-proxy-xqvf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:43.065118   13524 pod_ready.go:83] waiting for pod "kube-scheduler-addons-873698" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:43.465159   13524 pod_ready.go:94] pod "kube-scheduler-addons-873698" is "Ready"
	I1210 05:46:43.465193   13524 pod_ready.go:86] duration metric: took 400.039338ms for pod "kube-scheduler-addons-873698" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:46:43.465210   13524 pod_ready.go:40] duration metric: took 1.604283727s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:46:43.510580   13524 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 05:46:43.512595   13524 out.go:179] * Done! kubectl is now configured to use "addons-873698" cluster and "default" namespace by default
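The gcp-auth messages above describe a configuration knob rather than a failure: a pod can opt out of credential mounting via the `gcp-auth-skip-secret` label. A minimal sketch of such a pod, applied with kubectl (the pod name and image are placeholders, and the label value "true" is an assumption about how the addon's webhook checks the key; only the key itself comes from the log message above):

kubectl --context addons-873698 apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds              # hypothetical name, not from this run
  labels:
    gcp-auth-skip-secret: "true"  # key quoted in the gcp-auth output above
spec:
  containers:
  - name: app
    image: busybox                # placeholder image
    command: ["sleep", "3600"]
EOF

As the log also notes, pods created before the addon finished enabling only pick up the credential mount if they are recreated or the addon is re-enabled with --refresh.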
	
	
	==> CRI-O <==
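The entries below are CRI-O's debug log of the CRI calls (Version, ImageFsInfo, ListPodSandbox, ListContainers) issued while these logs were collected. For reference, the same views can be pulled by hand from inside the node with crictl, which speaks the same RPCs (a sketch only; it assumes crictl is available inside the minikube VM, as it normally is):

out/minikube-linux-amd64 -p addons-873698 ssh
sudo crictl version       # RuntimeService/Version
sudo crictl imagefsinfo   # ImageService/ImageFsInfo
sudo crictl pods          # RuntimeService/ListPodSandbox
sudo crictl ps -a         # RuntimeService/ListContainers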
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.727707467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2384511-750c-49ec-8953-d893060394c5 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.728869135Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f10f870-f4ea-432f-ad4c-ab8c6e3f9449 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.730142468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345791730108552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545751,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f10f870-f4ea-432f-ad4c-ab8c6e3f9449 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.730726457Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5dfd122e-b1db-43af-99e6-559ef0874378 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.731064019Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:11fa7b5c4c77d656c19da97d61e90b94d0a3f2dfed168ed3655566808df288d0,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-qcp5n,Uid:7d10dbc5-96e1-44cd-b95c-193933bbd5fd,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345790878447644,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-qcp5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7d10dbc5-96e1-44cd-b95c-193933bbd5fd,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:49:50.551011078Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e23135f39c50182e702d82990b5e3d6058ad82d66c71306447097c44e217e97,Metadata:&PodSandboxMetadata{Name:nginx,Uid:42f127f8-e477-4ebc-a82d-17e652b8be12,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1765345648278717349,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42f127f8-e477-4ebc-a82d-17e652b8be12,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:47:27.953352372Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45a6839320574ddc0b3647ef7ba2cfb61b1eec33740cea145b1de0f29d6081c9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:72587d64-2d5b-41de-bf62-e638cb2f27ce,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345604443902514,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72587d64-2d5b-41de-bf62-e638cb2f27ce,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:46:44.119438235Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4e86415a46de00bd5ec6
ecbe411afc1ae176fdf84d9561e462821b141d71c77,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-85d4c799dd-sx2zs,Uid:e7a892f9-8467-45d8-a61a-7e4bb306b290,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345589972621850,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-sx2zs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e7a892f9-8467-45d8-a61a-7e4bb306b290,pod-template-hash: 85d4c799dd,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:45:26.047554845Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f974c54d65eded48c365fb7188b84fdc5d0c1ba0aa4dd0e89de4af9ed9ecf07,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-2zqn8,Uid:b1438891-64ac-4c6f-bea2-d879f3bdb8c1,Namespace:ingress-nginx,Attempt:0,},St
ate:SANDBOX_NOTREADY,CreatedAt:1765345526965129132,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 8083c7e0-2c25-4bd5-b8d0-19daf1283daf,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 8083c7e0-2c25-4bd5-b8d0-19daf1283daf,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zqn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b1438891-64ac-4c6f-bea2-d879f3bdb8c1,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:45:26.226784404Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a22861f3c3aecf4b66c5479b030c60796312844a9afde68fc6083ac351992569,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-znwmw,Uid:c2420aef-0cc0-47f9-9e6b-c1b37d828efe,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1765345526945155958,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: b3826e6c-db38-4299-8c49-c02de70f61c8,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: b3826e6c-db38-4299-8c49-c02de70f61c8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-znwmw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2420aef-0cc0-47f9-9e6b-c1b37d828efe,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:45:26.208963922Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99153136e27134a532d4186f5010ad58e43043e5912955f5b30c8d593ea4dcf3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5b966211-0174-4fb4-9e9b-f1d1e31f9287,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345523597410446,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b966211-0174-4fb4-9e9b-f1d1e31f9287,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2025-12-10T05:45:23.246150088Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf513d02143564653e39d7e60db33af8519d673442bd72868686cdc75834b968,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:6a7d26f9-194e-4f3d-80ce-17e899d2b880,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345522821149380,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7d26f9-194e-4f3d-80ce-17e899d2b880,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":
\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-12-10T05:45:22.491656603Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf18171277de473f661ba85233f0587d973c79f8c438eaff9405924a3ec806c2,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-h2nzx,Uid:b2185212-d509-4fe1-8751-e03941bebb34,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:176534552104093
5931,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-h2nzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2185212-d509-4fe1-8751-e03941bebb34,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:45:20.698649583Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2bd93830ec6dcece30a176daad125f8f7fd1114199446b9ed642413722cfc6cc,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-pbs5w,Uid:7d50de10-1d73-4103-9acc-e63b30d392c8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345517634547185,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-pbs5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d50de10-1d73-4103-9acc-e63b30d392c8,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[st
ring]string{kubernetes.io/config.seen: 2025-12-10T05:45:17.292905185Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:843ab133210339163e1efa48ddc97ef8d5b2a49460069b8b375bde7644b8ef90,Metadata:&PodSandboxMetadata{Name:kube-proxy-xqvf9,Uid:0cb500e8-30a4-4914-913f-c184a12edd2e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345517392858921,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xqvf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb500e8-30a4-4914-913f-c184a12edd2e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:45:17.063388219Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a43ac33eabd9c78521362f59061e71af6906ece12913962593969faf527dbc5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-873698,Uid:58ee372554d36b076a20750ccc7289b6,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1765345505848559041,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ee372554d36b076a20750ccc7289b6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.151:8443,kubernetes.io/config.hash: 58ee372554d36b076a20750ccc7289b6,kubernetes.io/config.seen: 2025-12-10T05:45:05.318487145Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bbae4df47785e97545db384123b3a2fb48e0195d2ba9f1d46cc51baf78001924,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-873698,Uid:6aef92c9cf223bcc60ad2ead076eb534,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345505844563224,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-873698,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 6aef92c9cf223bcc60ad2ead076eb534,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6aef92c9cf223bcc60ad2ead076eb534,kubernetes.io/config.seen: 2025-12-10T05:45:05.318484877Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:084b5e00f276faeeea613ef5245d83dd48e887278ed2c0c5386d43bffd905d33,Metadata:&PodSandboxMetadata{Name:etcd-addons-873698,Uid:7e3697ee65a879491fcecb8487db9e3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345505842832009,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3697ee65a879491fcecb8487db9e3b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.151:2379,kubernetes.io/config.hash: 7e3697ee65a879491fcecb8487db9e3b,kubernetes.io/config.seen: 2025-12-10T05:45:05.318486068Z,kubernetes.io/con
fig.source: file,},RuntimeHandler:,},&PodSandbox{Id:0f013d0c2b1196dfe1c51cae2b78d13ef5e26d9bc2b7beaeb6403b882a443420,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-873698,Uid:3de5e357e3a5084c9fc7f3d2991810a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345505831100307,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de5e357e3a5084c9fc7f3d2991810a2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3de5e357e3a5084c9fc7f3d2991810a2,kubernetes.io/config.seen: 2025-12-10T05:45:05.318481183Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5dfd122e-b1db-43af-99e6-559ef0874378 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.732089886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d46ebabf-550a-4928-af27-35d6083d2525 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.732148511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d46ebabf-550a-4928-af27-35d6083d2525 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.732478149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e63ae1b5302a1cd9b60bc4bae4751b930e85b8b9d768733495ab7ab597a1f72c,PodSandboxId:3e23135f39c50182e702d82990b5e3d6058ad82d66c71306447097c44e217e97,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345648552847845,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42f127f8-e477-4ebc-a82d-17e652b8be12,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e2858c193d44246d891201b8061bf1f99b88234f5002155e9399993b548571,PodSandboxId:45a6839320574ddc0b3647ef7ba2cfb61b1eec33740cea145b1de0f29d6081c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765345607763921959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72587d64-2d5b-41de-bf62-e638cb2f27ce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0218f20cf1be5e4cfb7b96365079d5b86579f52faa60f23b59206f2ae58a9b8,PodSandboxId:b4e86415a46de00bd5ec6ecbe411afc1ae176fdf84d9561e462821b141d71c77,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765345596991419011,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-sx2zs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e7a892f9-8467-45d8-a61a-7e4bb306b290,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bca07a0d945e25272ac3e6c1ccb7d64fad7e8c06f1b61cbe779282a1631e1d84,PodSandboxId:1f974c54d65eded48c365fb7188b84fdc5d0c1ba0aa4dd0e89de4af9ed9ecf07,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765345577302265085,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zqn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b1438891-64ac-4c6f-bea2-d879f3bdb8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6922c0be775c8d7120e1ac1e554e037311e8b66ce3878e67746f8c55c86580a5,PodSandboxId:a22861f3c3aecf4b66c5479b030c60796312844a9afde68fc6083ac351992569,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765345576707840465,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-znwmw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2420aef-0cc0-47f9-9e6b-c1b37d828efe,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8e45c959d531d84508cfb0fc4edff254247917267a42436e7f949f0c39860,PodSandboxId:cf513d02143564653e39d7e60db33af8519d673442bd72868686cdc75834b968,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765345550068403676,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7d26f9-194e-4f3d-80ce-17e899d2b880,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9013237c28c082b727e26e539bf67c274e6ca134b00cf6190f410716e4dba0,PodSandboxId:bf18171277de473f661ba85233f0587d973c79f8c438eaff9405924a3ec806c2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765345534888088024,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h2nzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2185212-d509-4fe1-8751-e03941bebb34,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bebb370cf6139d32f1b9f95ef40bee5c06b728e8a94923fdea08996f6670503,PodSandboxId:99153136e27134a532d4186f5010ad58e43043e5912955f5b30c8d593ea4dcf3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345524507453140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b966211-0174-4fb4-9e9b-f1d1e31f9287,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6112c523be98c12c67175055791a72dc81e8ddf732c2e8ba3568b2dd6125e39a,PodSandboxId:2bd93830ec6dcece30a176daad125f8f7fd1114199446b9ed642413722cfc6cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345518669485988,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pbs5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d50de10-1d73-4103-9acc-e63b30d392c8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c58c9093844080658f9bec7e0524536cfa9366e7c565d5d6681fbe5ca9a6ccbf,PodSandboxId:843ab133210339163e1efa48ddc97ef8d5b2a49460069b8b375bde7644b8ef90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765345517871096990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqvf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb500e8-30a4-4914-913f-c184a12edd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:2fa109007bdd57bef1123a1784ef687a82af942a119c3d92e69a206fe9aaa52e,PodSandboxId:084b5e00f276faeeea613ef5245d83dd48e887278ed2c0c5386d43bffd905d33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345506098241105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3697ee65a879491fcecb8487db9e3b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be7049469bbf6ccca02c7f0bb57f85262de9af2e8fc817d9ac5f93fe4015c7,PodSandboxId:2a43ac33eabd9c78521362f59061e71af6906ece12913962593969faf527dbc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765345506048083213,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ee372554d36b076a20750ccc7289b6,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11371e84434c712ae3d54aebb373d2f904ae28abee15e363239c769fc88f745f,PodSandboxId:bbae4df47785e97545db384123b3a2fb48e0195d2ba9f1d46cc51baf78001924,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765345506033480802,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aef92c9cf223bcc60ad2ead076eb534,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"nam
e\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a49740b8f5c6ac1cd1b86c7ed489886169bae5d025aa95309ccdfb80ee8ef1,PodSandboxId:0f013d0c2b1196dfe1c51cae2b78d13ef5e26d9bc2b7beaeb6403b882a443420,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765345506037533456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de5e357e
3a5084c9fc7f3d2991810a2,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d46ebabf-550a-4928-af27-35d6083d2525 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.733233714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a34fd9db-f4c2-4b9d-a62f-5835d7ba6002 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.733309520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a34fd9db-f4c2-4b9d-a62f-5835d7ba6002 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.733605896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e63ae1b5302a1cd9b60bc4bae4751b930e85b8b9d768733495ab7ab597a1f72c,PodSandboxId:3e23135f39c50182e702d82990b5e3d6058ad82d66c71306447097c44e217e97,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345648552847845,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42f127f8-e477-4ebc-a82d-17e652b8be12,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e2858c193d44246d891201b8061bf1f99b88234f5002155e9399993b548571,PodSandboxId:45a6839320574ddc0b3647ef7ba2cfb61b1eec33740cea145b1de0f29d6081c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765345607763921959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72587d64-2d5b-41de-bf62-e638cb2f27ce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0218f20cf1be5e4cfb7b96365079d5b86579f52faa60f23b59206f2ae58a9b8,PodSandboxId:b4e86415a46de00bd5ec6ecbe411afc1ae176fdf84d9561e462821b141d71c77,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765345596991419011,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-sx2zs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e7a892f9-8467-45d8-a61a-7e4bb306b290,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bca07a0d945e25272ac3e6c1ccb7d64fad7e8c06f1b61cbe779282a1631e1d84,PodSandboxId:1f974c54d65eded48c365fb7188b84fdc5d0c1ba0aa4dd0e89de4af9ed9ecf07,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765345577302265085,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zqn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b1438891-64ac-4c6f-bea2-d879f3bdb8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6922c0be775c8d7120e1ac1e554e037311e8b66ce3878e67746f8c55c86580a5,PodSandboxId:a22861f3c3aecf4b66c5479b030c60796312844a9afde68fc6083ac351992569,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765345576707840465,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-znwmw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2420aef-0cc0-47f9-9e6b-c1b37d828efe,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8e45c959d531d84508cfb0fc4edff254247917267a42436e7f949f0c39860,PodSandboxId:cf513d02143564653e39d7e60db33af8519d673442bd72868686cdc75834b968,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765345550068403676,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7d26f9-194e-4f3d-80ce-17e899d2b880,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9013237c28c082b727e26e539bf67c274e6ca134b00cf6190f410716e4dba0,PodSandboxId:bf18171277de473f661ba85233f0587d973c79f8c438eaff9405924a3ec806c2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765345534888088024,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h2nzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2185212-d509-4fe1-8751-e03941bebb34,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bebb370cf6139d32f1b9f95ef40bee5c06b728e8a94923fdea08996f6670503,PodSandboxId:99153136e27134a532d4186f5010ad58e43043e5912955f5b30c8d593ea4dcf3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345524507453140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b966211-0174-4fb4-9e9b-f1d1e31f9287,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6112c523be98c12c67175055791a72dc81e8ddf732c2e8ba3568b2dd6125e39a,PodSandboxId:2bd93830ec6dcece30a176daad125f8f7fd1114199446b9ed642413722cfc6cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345518669485988,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pbs5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d50de10-1d73-4103-9acc-e63b30d392c8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c58c9093844080658f9bec7e0524536cfa9366e7c565d5d6681fbe5ca9a6ccbf,PodSandboxId:843ab133210339163e1efa48ddc97ef8d5b2a49460069b8b375bde7644b8ef90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765345517871096990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqvf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb500e8-30a4-4914-913f-c184a12edd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:2fa109007bdd57bef1123a1784ef687a82af942a119c3d92e69a206fe9aaa52e,PodSandboxId:084b5e00f276faeeea613ef5245d83dd48e887278ed2c0c5386d43bffd905d33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345506098241105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3697ee65a879491fcecb8487db9e3b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be7049469bbf6ccca02c7f0bb57f85262de9af2e8fc817d9ac5f93fe4015c7,PodSandboxId:2a43ac33eabd9c78521362f59061e71af6906ece12913962593969faf527dbc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765345506048083213,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ee372554d36b076a20750ccc7289b6,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11371e84434c712ae3d54aebb373d2f904ae28abee15e363239c769fc88f745f,PodSandboxId:bbae4df47785e97545db384123b3a2fb48e0195d2ba9f1d46cc51baf78001924,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765345506033480802,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aef92c9cf223bcc60ad2ead076eb534,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"nam
e\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a49740b8f5c6ac1cd1b86c7ed489886169bae5d025aa95309ccdfb80ee8ef1,PodSandboxId:0f013d0c2b1196dfe1c51cae2b78d13ef5e26d9bc2b7beaeb6403b882a443420,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765345506037533456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de5e357e
3a5084c9fc7f3d2991810a2,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a34fd9db-f4c2-4b9d-a62f-5835d7ba6002 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.735804016Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 7d10dbc5-96e1-44cd-b95c-193933bbd5fd,},},}" file="otel-collector/interceptors.go:62" id=71d75f4d-c7be-4506-be37-679086a0b5ed name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.736239706Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:11fa7b5c4c77d656c19da97d61e90b94d0a3f2dfed168ed3655566808df288d0,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-qcp5n,Uid:7d10dbc5-96e1-44cd-b95c-193933bbd5fd,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345790878447644,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-qcp5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7d10dbc5-96e1-44cd-b95c-193933bbd5fd,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:49:50.551011078Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=71d75f4d-c7be-4506-be37-679086a0b5ed name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.738112053Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:11fa7b5c4c77d656c19da97d61e90b94d0a3f2dfed168ed3655566808df288d0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=4e390ffe-2344-4cbf-804e-9647055fb2e6 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.738236380Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:11fa7b5c4c77d656c19da97d61e90b94d0a3f2dfed168ed3655566808df288d0,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-qcp5n,Uid:7d10dbc5-96e1-44cd-b95c-193933bbd5fd,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345790878447644,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-qcp5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7d10dbc5-96e1-44cd-b95c-193933bbd5fd,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-12-10T05:49:50.551011078Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=4e390ffe-2344-4cbf-804e-9647055fb2e6 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.741875224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 7d10dbc5-96e1-44cd-b95c-193933bbd5fd,},},}" file="otel-collector/interceptors.go:62" id=3fcb5738-0b03-44ea-ab8f-2f9a0a6cdf33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.741988349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fcb5738-0b03-44ea-ab8f-2f9a0a6cdf33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.742074772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3fcb5738-0b03-44ea-ab8f-2f9a0a6cdf33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.767878997Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13d1fe18-b4fe-467c-889d-8ff915abe8a5 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.767971405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13d1fe18-b4fe-467c-889d-8ff915abe8a5 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.770224557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba78b1b9-90a2-4553-9ca1-7c9422b86e6f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.773072918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345791773030740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545751,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba78b1b9-90a2-4553-9ca1-7c9422b86e6f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.777632939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cae0250e-1ffc-4296-a156-4f9eccd3323f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.777727531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cae0250e-1ffc-4296-a156-4f9eccd3323f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:49:51 addons-873698 crio[816]: time="2025-12-10 05:49:51.778064405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e63ae1b5302a1cd9b60bc4bae4751b930e85b8b9d768733495ab7ab597a1f72c,PodSandboxId:3e23135f39c50182e702d82990b5e3d6058ad82d66c71306447097c44e217e97,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345648552847845,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42f127f8-e477-4ebc-a82d-17e652b8be12,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e2858c193d44246d891201b8061bf1f99b88234f5002155e9399993b548571,PodSandboxId:45a6839320574ddc0b3647ef7ba2cfb61b1eec33740cea145b1de0f29d6081c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765345607763921959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72587d64-2d5b-41de-bf62-e638cb2f27ce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0218f20cf1be5e4cfb7b96365079d5b86579f52faa60f23b59206f2ae58a9b8,PodSandboxId:b4e86415a46de00bd5ec6ecbe411afc1ae176fdf84d9561e462821b141d71c77,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765345596991419011,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-sx2zs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e7a892f9-8467-45d8-a61a-7e4bb306b290,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bca07a0d945e25272ac3e6c1ccb7d64fad7e8c06f1b61cbe779282a1631e1d84,PodSandboxId:1f974c54d65eded48c365fb7188b84fdc5d0c1ba0aa4dd0e89de4af9ed9ecf07,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765345577302265085,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zqn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b1438891-64ac-4c6f-bea2-d879f3bdb8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6922c0be775c8d7120e1ac1e554e037311e8b66ce3878e67746f8c55c86580a5,PodSandboxId:a22861f3c3aecf4b66c5479b030c60796312844a9afde68fc6083ac351992569,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765345576707840465,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-znwmw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2420aef-0cc0-47f9-9e6b-c1b37d828efe,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8e45c959d531d84508cfb0fc4edff254247917267a42436e7f949f0c39860,PodSandboxId:cf513d02143564653e39d7e60db33af8519d673442bd72868686cdc75834b968,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765345550068403676,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7d26f9-194e-4f3d-80ce-17e899d2b880,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9013237c28c082b727e26e539bf67c274e6ca134b00cf6190f410716e4dba0,PodSandboxId:bf18171277de473f661ba85233f0587d973c79f8c438eaff9405924a3ec806c2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765345534888088024,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h2nzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2185212-d509-4fe1-8751-e03941bebb34,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bebb370cf6139d32f1b9f95ef40bee5c06b728e8a94923fdea08996f6670503,PodSandboxId:99153136e27134a532d4186f5010ad58e43043e5912955f5b30c8d593ea4dcf3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345524507453140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b966211-0174-4fb4-9e9b-f1d1e31f9287,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6112c523be98c12c67175055791a72dc81e8ddf732c2e8ba3568b2dd6125e39a,PodSandboxId:2bd93830ec6dcece30a176daad125f8f7fd1114199446b9ed642413722cfc6cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345518669485988,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pbs5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d50de10-1d73-4103-9acc-e63b30d392c8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c58c9093844080658f9bec7e0524536cfa9366e7c565d5d6681fbe5ca9a6ccbf,PodSandboxId:843ab133210339163e1efa48ddc97ef8d5b2a49460069b8b375bde7644b8ef90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765345517871096990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqvf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb500e8-30a4-4914-913f-c184a12edd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:2fa109007bdd57bef1123a1784ef687a82af942a119c3d92e69a206fe9aaa52e,PodSandboxId:084b5e00f276faeeea613ef5245d83dd48e887278ed2c0c5386d43bffd905d33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345506098241105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3697ee65a879491fcecb8487db9e3b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be7049469bbf6ccca02c7f0bb57f85262de9af2e8fc817d9ac5f93fe4015c7,PodSandboxId:2a43ac33eabd9c78521362f59061e71af6906ece12913962593969faf527dbc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765345506048083213,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ee372554d36b076a20750ccc7289b6,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11371e84434c712ae3d54aebb373d2f904ae28abee15e363239c769fc88f745f,PodSandboxId:bbae4df47785e97545db384123b3a2fb48e0195d2ba9f1d46cc51baf78001924,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765345506033480802,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aef92c9cf223bcc60ad2ead076eb534,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"nam
e\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a49740b8f5c6ac1cd1b86c7ed489886169bae5d025aa95309ccdfb80ee8ef1,PodSandboxId:0f013d0c2b1196dfe1c51cae2b78d13ef5e26d9bc2b7beaeb6403b882a443420,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765345506037533456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-873698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de5e357e
3a5084c9fc7f3d2991810a2,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cae0250e-1ffc-4296-a156-4f9eccd3323f name=/runtime.v1.RuntimeService/ListContainers
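	
	The ListContainers/Version entries above are ordinary CRI polling (most likely the kubelet plus the crictl-style collection done while gathering these logs). For reference only, a minimal Go sketch of the same two RPCs against the CRI-O socket; the socket path is the usual CRI-O default and is an assumption here, and this is illustrative, not part of the test suite:
	
	// cri_list.go: dial CRI-O and issue /runtime.v1.RuntimeService/Version and
	// /runtime.v1.RuntimeService/ListContainers, the RPCs seen in the log above.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed CRI-O endpoint inside the minikube VM.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Same RPC as the RuntimeService/Version entries in the log.
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("Version: %v", err)
		}
		fmt.Printf("%s %s\n", ver.RuntimeName, ver.RuntimeVersion)
	
		// Same RPC as the RuntimeService/ListContainers entries: an empty filter
		// returns the full container list, as in the "No filters were applied" lines.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// Container IDs are 64 hex chars; truncate like the status table below.
			fmt.Printf("%s\t%s\t%s\n", c.GetId()[:13], c.GetState(), c.GetMetadata().GetName())
		}
	}
	
	Run inside the VM (for example via minikube ssh), this should print roughly the same rows as the container status table that follows.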
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e63ae1b5302a1       d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9                                                             2 minutes ago       Running             nginx                     0                   3e23135f39c50       nginx                                       default
	47e2858c193d4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   45a6839320574       busybox                                     default
	d0218f20cf1be       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   b4e86415a46de       ingress-nginx-controller-85d4c799dd-sx2zs   ingress-nginx
	bca07a0d945e2       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                             3 minutes ago       Exited              patch                     1                   1f974c54d65ed       ingress-nginx-admission-patch-2zqn8         ingress-nginx
	6922c0be775c8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   a22861f3c3aec       ingress-nginx-admission-create-znwmw        ingress-nginx
	c9e8e45c959d5       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   cf513d0214356       kube-ingress-dns-minikube                   kube-system
	cf9013237c28c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   bf18171277de4       amd-gpu-device-plugin-h2nzx                 kube-system
	7bebb370cf613       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   99153136e2713       storage-provisioner                         kube-system
	6112c523be98c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   2bd93830ec6dc       coredns-66bc5c9577-pbs5w                    kube-system
	c58c909384408       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   843ab13321033       kube-proxy-xqvf9                            kube-system
	2fa109007bdd5       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   084b5e00f276f       etcd-addons-873698                          kube-system
	d5be7049469bb       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   2a43ac33eabd9       kube-apiserver-addons-873698                kube-system
	12a49740b8f5c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   0f013d0c2b119       kube-controller-manager-addons-873698       kube-system
	11371e84434c7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   bbae4df47785e       kube-scheduler-addons-873698                kube-system
	
	
	==> coredns [6112c523be98c12c67175055791a72dc81e8ddf732c2e8ba3568b2dd6125e39a] <==
	[INFO] 10.244.0.8:47989 - 46051 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000126348s
	[INFO] 10.244.0.8:47989 - 947 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000115219s
	[INFO] 10.244.0.8:47989 - 40926 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00008056s
	[INFO] 10.244.0.8:47989 - 32435 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000145779s
	[INFO] 10.244.0.8:47989 - 20191 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000266286s
	[INFO] 10.244.0.8:47989 - 39070 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000111131s
	[INFO] 10.244.0.8:47989 - 35985 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000099366s
	[INFO] 10.244.0.8:52564 - 57680 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154101s
	[INFO] 10.244.0.8:52564 - 57974 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152053s
	[INFO] 10.244.0.8:41825 - 59535 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000182036s
	[INFO] 10.244.0.8:41825 - 59264 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000160817s
	[INFO] 10.244.0.8:52254 - 23739 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007367s
	[INFO] 10.244.0.8:52254 - 23536 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000294008s
	[INFO] 10.244.0.8:35975 - 33527 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083709s
	[INFO] 10.244.0.8:35975 - 33111 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000233862s
	[INFO] 10.244.0.23:57731 - 53320 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000716768s
	[INFO] 10.244.0.23:57152 - 3194 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000144912s
	[INFO] 10.244.0.23:51995 - 2502 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000211929s
	[INFO] 10.244.0.23:51816 - 40709 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122849s
	[INFO] 10.244.0.23:52838 - 20927 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096689s
	[INFO] 10.244.0.23:35774 - 47779 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118562s
	[INFO] 10.244.0.23:33725 - 57288 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00144645s
	[INFO] 10.244.0.23:56657 - 61387 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.001278461s
	[INFO] 10.244.0.28:35047 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001438049s
	[INFO] 10.244.0.28:50370 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00021027s
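	
	The NXDOMAIN-then-NOERROR pattern above is the pod resolver walking its resolv.conf search list (the suffixes .kube-system.svc.cluster.local, .svc.cluster.local, .cluster.local visible in the query names, with the usual ndots:5) before the fully qualified name answers. A minimal sketch of the lookup that produces this sequence, assuming it runs inside a pod on this cluster (here, one in kube-system, judging by the search suffixes):
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Force the pure-Go resolver so the search/ndots handling from the pod's
		// /etc/resolv.conf happens in-process; each search-suffix attempt is what
		// shows up as an NXDOMAIN line in the coredns log above.
		r := &net.Resolver{PreferGo: true}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		addrs, err := r.LookupIPAddr(ctx, "registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		for _, a := range addrs {
			fmt.Println(a.IP) // the Service's ClusterIP
		}
	}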
	
	
	==> describe nodes <==
	Name:               addons-873698
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-873698
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=addons-873698
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_45_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-873698
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:45:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-873698
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:49:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:47:45 +0000   Wed, 10 Dec 2025 05:45:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:47:45 +0000   Wed, 10 Dec 2025 05:45:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:47:45 +0000   Wed, 10 Dec 2025 05:45:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:47:45 +0000   Wed, 10 Dec 2025 05:45:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.151
	  Hostname:    addons-873698
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb6628511838461b8c30e421e58da7d5
	  System UUID:                eb662851-1838-461b-8c30-e421e58da7d5
	  Boot ID:                    e0a6be60-7ee3-4519-9aca-96725d41768e
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-5d498dc89-qcp5n              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-sx2zs    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-h2nzx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 coredns-66bc5c9577-pbs5w                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m35s
	  kube-system                 etcd-addons-873698                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-873698                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-873698        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-proxy-xqvf9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-addons-873698                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m32s  kube-proxy       
	  Normal  Starting                 4m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s  kubelet          Node addons-873698 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s  kubelet          Node addons-873698 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s  kubelet          Node addons-873698 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s  kubelet          Node addons-873698 status is now: NodeReady
	  Normal  RegisteredNode           4m36s  node-controller  Node addons-873698 event: Registered Node addons-873698 in Controller
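	
	For reference, the percentages in the Allocated resources table are taken against the node's allocatable capacity shown above (2 CPUs, 4001788Ki memory) and displayed truncated to whole percent; a quick check, illustrative only:
	
	package main
	
	import "fmt"
	
	func main() {
		// Allocatable, from the node description above.
		const allocCPUMilli = 2000.0 // 2 CPUs
		const allocMemKi = 4001788.0 // 4001788Ki
	
		fmt.Printf("cpu requests: %.1f%%\n", 850.0/allocCPUMilli*100)   // 42.5% -> shown as 42%
		fmt.Printf("mem requests: %.1f%%\n", 260.0*1024/allocMemKi*100) // ~6.7% -> shown as 6%
		fmt.Printf("mem limits:   %.1f%%\n", 170.0*1024/allocMemKi*100) // ~4.3% -> shown as 4%
	}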
	
	
	==> dmesg <==
	[  +0.000023] kauditd_printk_skb: 365 callbacks suppressed
	[  +4.004034] kauditd_printk_skb: 323 callbacks suppressed
	[  +5.834077] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.342498] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.923652] kauditd_printk_skb: 20 callbacks suppressed
	[Dec10 05:46] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.698793] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.045277] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.991088] kauditd_printk_skb: 141 callbacks suppressed
	[  +1.342271] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 50 callbacks suppressed
	[  +0.000045] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.383827] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.318245] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.677438] kauditd_printk_skb: 17 callbacks suppressed
	[Dec10 05:47] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000034] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.237512] kauditd_printk_skb: 102 callbacks suppressed
	[  +2.472858] kauditd_printk_skb: 148 callbacks suppressed
	[  +5.559854] kauditd_printk_skb: 120 callbacks suppressed
	[  +4.023493] kauditd_printk_skb: 84 callbacks suppressed
	[  +0.000747] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.586313] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.000044] kauditd_printk_skb: 132 callbacks suppressed
	[Dec10 05:49] kauditd_printk_skb: 37 callbacks suppressed
	
	
	==> etcd [2fa109007bdd57bef1123a1784ef687a82af942a119c3d92e69a206fe9aaa52e] <==
	{"level":"warn","ts":"2025-12-10T05:46:00.172803Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.365538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-pbs5w\" limit:1 ","response":"range_response_count:1 size:5631"}
	{"level":"info","ts":"2025-12-10T05:46:00.173621Z","caller":"traceutil/trace.go:172","msg":"trace[602627500] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-pbs5w; range_end:; response_count:1; response_revision:953; }","duration":"140.01714ms","start":"2025-12-10T05:46:00.033413Z","end":"2025-12-10T05:46:00.173430Z","steps":["trace[602627500] 'range keys from in-memory index tree'  (duration: 139.159474ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:08.639064Z","caller":"traceutil/trace.go:172","msg":"trace[904051361] transaction","detail":"{read_only:false; response_revision:981; number_of_response:1; }","duration":"109.340294ms","start":"2025-12-10T05:46:08.529711Z","end":"2025-12-10T05:46:08.639052Z","steps":["trace[904051361] 'process raft request'  (duration: 108.409761ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:14.798487Z","caller":"traceutil/trace.go:172","msg":"trace[401251675] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"106.475529ms","start":"2025-12-10T05:46:14.691998Z","end":"2025-12-10T05:46:14.798474Z","steps":["trace[401251675] 'process raft request'  (duration: 106.395852ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:16.648521Z","caller":"traceutil/trace.go:172","msg":"trace[427901699] transaction","detail":"{read_only:false; response_revision:1017; number_of_response:1; }","duration":"240.267117ms","start":"2025-12-10T05:46:16.408241Z","end":"2025-12-10T05:46:16.648508Z","steps":["trace[427901699] 'process raft request'  (duration: 240.111365ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:16.648643Z","caller":"traceutil/trace.go:172","msg":"trace[1905403530] linearizableReadLoop","detail":"{readStateIndex:1045; appliedIndex:1046; }","duration":"219.728997ms","start":"2025-12-10T05:46:16.428886Z","end":"2025-12-10T05:46:16.648615Z","steps":["trace[1905403530] 'read index received'  (duration: 219.666576ms)","trace[1905403530] 'applied index is now lower than readState.Index'  (duration: 61.7µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:46:16.649070Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"216.051244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:46:16.649117Z","caller":"traceutil/trace.go:172","msg":"trace[1470881165] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:1017; }","duration":"216.105634ms","start":"2025-12-10T05:46:16.433004Z","end":"2025-12-10T05:46:16.649109Z","steps":["trace[1470881165] 'agreement among raft nodes before linearized reading'  (duration: 216.034397ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:16.649313Z","caller":"traceutil/trace.go:172","msg":"trace[228979555] transaction","detail":"{read_only:false; response_revision:1018; number_of_response:1; }","duration":"203.992705ms","start":"2025-12-10T05:46:16.445313Z","end":"2025-12-10T05:46:16.649306Z","steps":["trace[228979555] 'process raft request'  (duration: 203.939042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:46:16.651320Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"221.886556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-2zqn8\" limit:1 ","response":"range_response_count:1 size:4387"}
	{"level":"info","ts":"2025-12-10T05:46:16.656868Z","caller":"traceutil/trace.go:172","msg":"trace[1359341641] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-2zqn8; range_end:; response_count:1; response_revision:1017; }","duration":"227.965729ms","start":"2025-12-10T05:46:16.428883Z","end":"2025-12-10T05:46:16.656848Z","steps":["trace[1359341641] 'agreement among raft nodes before linearized reading'  (duration: 219.829078ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:21.578960Z","caller":"traceutil/trace.go:172","msg":"trace[445121886] linearizableReadLoop","detail":"{readStateIndex:1098; appliedIndex:1098; }","duration":"237.447441ms","start":"2025-12-10T05:46:21.341488Z","end":"2025-12-10T05:46:21.578935Z","steps":["trace[445121886] 'read index received'  (duration: 237.441407ms)","trace[445121886] 'applied index is now lower than readState.Index'  (duration: 5.155µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:46:21.579172Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.662447ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:46:21.579196Z","caller":"traceutil/trace.go:172","msg":"trace[1674393569] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"237.702108ms","start":"2025-12-10T05:46:21.341484Z","end":"2025-12-10T05:46:21.579186Z","steps":["trace[1674393569] 'agreement among raft nodes before linearized reading'  (duration: 237.638325ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:46:21.579244Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.742718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:46:21.579274Z","caller":"traceutil/trace.go:172","msg":"trace[826363149] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"237.779328ms","start":"2025-12-10T05:46:21.341490Z","end":"2025-12-10T05:46:21.579269Z","steps":["trace[826363149] 'agreement among raft nodes before linearized reading'  (duration: 237.731112ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:46:21.579406Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.860307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:46:21.579424Z","caller":"traceutil/trace.go:172","msg":"trace[846615802] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1069; }","duration":"105.878376ms","start":"2025-12-10T05:46:21.473541Z","end":"2025-12-10T05:46:21.579419Z","steps":["trace[846615802] 'agreement among raft nodes before linearized reading'  (duration: 105.847851ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:21.579057Z","caller":"traceutil/trace.go:172","msg":"trace[2029815111] transaction","detail":"{read_only:false; response_revision:1069; number_of_response:1; }","duration":"251.154341ms","start":"2025-12-10T05:46:21.327892Z","end":"2025-12-10T05:46:21.579046Z","steps":["trace[2029815111] 'process raft request'  (duration: 251.067986ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:31.894769Z","caller":"traceutil/trace.go:172","msg":"trace[1608880673] transaction","detail":"{read_only:false; response_revision:1134; number_of_response:1; }","duration":"122.299756ms","start":"2025-12-10T05:46:31.772455Z","end":"2025-12-10T05:46:31.894754Z","steps":["trace[1608880673] 'process raft request'  (duration: 122.167706ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:33.386603Z","caller":"traceutil/trace.go:172","msg":"trace[940580572] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"126.137796ms","start":"2025-12-10T05:46:33.260413Z","end":"2025-12-10T05:46:33.386551Z","steps":["trace[940580572] 'process raft request'  (duration: 125.990028ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:46:35.201506Z","caller":"traceutil/trace.go:172","msg":"trace[711290532] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"201.73172ms","start":"2025-12-10T05:46:34.999735Z","end":"2025-12-10T05:46:35.201467Z","steps":["trace[711290532] 'process raft request'  (duration: 201.627941ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:46:36.649500Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.887904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:46:36.649798Z","caller":"traceutil/trace.go:172","msg":"trace[423810890] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:1142; }","duration":"241.259171ms","start":"2025-12-10T05:46:36.408526Z","end":"2025-12-10T05:46:36.649785Z","steps":["trace[423810890] 'range keys from in-memory index tree'  (duration: 240.797245ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:47:09.815421Z","caller":"traceutil/trace.go:172","msg":"trace[1542983813] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1349; }","duration":"152.572383ms","start":"2025-12-10T05:47:09.662836Z","end":"2025-12-10T05:47:09.815408Z","steps":["trace[1542983813] 'process raft request'  (duration: 152.490606ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:49:52 up 5 min,  0 users,  load average: 0.60, 1.25, 0.64
	Linux addons-873698 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d5be7049469bbf6ccca02c7f0bb57f85262de9af2e8fc817d9ac5f93fe4015c7] <==
	E1210 05:46:12.063290       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.108.110:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.108.110:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.108.110:443: connect: connection refused" logger="UnhandledError"
	I1210 05:46:12.099318       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 05:46:12.121136       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1210 05:46:54.298134       1 conn.go:339] Error on socket receive: read tcp 192.168.39.151:8443->192.168.39.1:47306: use of closed network connection
	E1210 05:46:54.489894       1 conn.go:339] Error on socket receive: read tcp 192.168.39.151:8443->192.168.39.1:47344: use of closed network connection
	I1210 05:47:03.828024       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.33.45"}
	I1210 05:47:27.812995       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 05:47:27.994271       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.141.4"}
	I1210 05:47:31.295244       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1210 05:47:33.592698       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1210 05:47:51.801024       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:47:51.801127       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:47:51.844338       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:47:51.844432       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:47:51.865542       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:47:51.865663       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:47:51.900168       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:47:51.900220       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:47:51.919639       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:47:51.919782       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1210 05:47:52.900494       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1210 05:47:52.920004       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1210 05:47:52.931299       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I1210 05:48:13.082464       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1210 05:49:50.658717       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.183.142"}
	
	
	==> kube-controller-manager [12a49740b8f5c6ac1cd1b86c7ed489886169bae5d025aa95309ccdfb80ee8ef1] <==
	E1210 05:48:07.834311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:48:11.913652       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:48:11.914692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:48:12.758221       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:48:12.759213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1210 05:48:16.282751       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 05:48:16.282787       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:48:16.358312       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 05:48:16.358355       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1210 05:48:21.447363       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:48:21.448357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:48:26.081028       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:48:26.082186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:48:33.462223       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:48:33.464034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:49:01.986941       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:49:01.988504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:49:03.456323       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:49:03.457391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:49:05.316629       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:49:05.318502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:49:35.375487       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:49:35.376756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:49:48.854869       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:49:48.856261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [c58c9093844080658f9bec7e0524536cfa9366e7c565d5d6681fbe5ca9a6ccbf] <==
	I1210 05:45:18.923976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:45:19.026383       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:45:19.026661       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.151"]
	E1210 05:45:19.028074       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:45:19.238961       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 05:45:19.239011       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 05:45:19.239032       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:45:19.255513       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:45:19.255796       1 server.go:527] "Version info" version="v1.34.2"
	I1210 05:45:19.255807       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:45:19.264760       1 config.go:200] "Starting service config controller"
	I1210 05:45:19.264774       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:45:19.264816       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:45:19.264820       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:45:19.264831       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:45:19.264834       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:45:19.265439       1 config.go:309] "Starting node config controller"
	I1210 05:45:19.265445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:45:19.265449       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:45:19.365887       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 05:45:19.366728       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:45:19.366820       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [11371e84434c712ae3d54aebb373d2f904ae28abee15e363239c769fc88f745f] <==
	E1210 05:45:09.063202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:45:09.063264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:45:09.063288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:45:09.063300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:45:09.063389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:45:09.063446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:45:09.063646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:45:09.063700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:45:09.063768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:45:09.063780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 05:45:09.063878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 05:45:09.063895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:45:09.063909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:45:09.064053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 05:45:09.064128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:45:09.889558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:45:09.907195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 05:45:09.988960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:45:10.011734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 05:45:10.232799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:45:10.253884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:45:10.285332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:45:10.335450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:45:10.352262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1210 05:45:13.054304       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:48:14 addons-873698 kubelet[1518]: I1210 05:48:14.158700    1518 scope.go:117] "RemoveContainer" containerID="2bb74dfb4ecd025d365f0fcae6fece6c56bd20b3e7a2217c5f2ecfab35c173bf"
	Dec 10 05:48:22 addons-873698 kubelet[1518]: E1210 05:48:22.056630    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345702056289083 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:48:22 addons-873698 kubelet[1518]: E1210 05:48:22.056667    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345702056289083 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:48:23 addons-873698 kubelet[1518]: I1210 05:48:23.887560    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h2nzx" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:48:32 addons-873698 kubelet[1518]: E1210 05:48:32.059292    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345712058836325 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:48:32 addons-873698 kubelet[1518]: E1210 05:48:32.059318    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345712058836325 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:48:42 addons-873698 kubelet[1518]: E1210 05:48:42.061949    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345722061532783 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:48:42 addons-873698 kubelet[1518]: E1210 05:48:42.062012    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345722061532783 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:48:52 addons-873698 kubelet[1518]: E1210 05:48:52.064940    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345732064401056 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:48:52 addons-873698 kubelet[1518]: E1210 05:48:52.065005    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345732064401056 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:02 addons-873698 kubelet[1518]: E1210 05:49:02.068495    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345742067953729 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:02 addons-873698 kubelet[1518]: E1210 05:49:02.068534    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345742067953729 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:07 addons-873698 kubelet[1518]: I1210 05:49:07.887349    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:49:12 addons-873698 kubelet[1518]: E1210 05:49:12.071935    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345752071506285 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:12 addons-873698 kubelet[1518]: E1210 05:49:12.071984    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345752071506285 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:22 addons-873698 kubelet[1518]: E1210 05:49:22.075931    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345762075475824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:22 addons-873698 kubelet[1518]: E1210 05:49:22.075959    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345762075475824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:29 addons-873698 kubelet[1518]: I1210 05:49:29.891556    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h2nzx" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:49:32 addons-873698 kubelet[1518]: E1210 05:49:32.078494    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345772078141440 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:32 addons-873698 kubelet[1518]: E1210 05:49:32.078516    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345772078141440 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:42 addons-873698 kubelet[1518]: E1210 05:49:42.082212    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345782081847398 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:42 addons-873698 kubelet[1518]: E1210 05:49:42.082252    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345782081847398 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:50 addons-873698 kubelet[1518]: I1210 05:49:50.680173    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxxq\" (UniqueName: \"kubernetes.io/projected/7d10dbc5-96e1-44cd-b95c-193933bbd5fd-kube-api-access-mzxxq\") pod \"hello-world-app-5d498dc89-qcp5n\" (UID: \"7d10dbc5-96e1-44cd-b95c-193933bbd5fd\") " pod="default/hello-world-app-5d498dc89-qcp5n"
	Dec 10 05:49:52 addons-873698 kubelet[1518]: E1210 05:49:52.086924    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345792085736697 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 10 05:49:52 addons-873698 kubelet[1518]: E1210 05:49:52.087223    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345792085736697 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	
	
	==> storage-provisioner [7bebb370cf6139d32f1b9f95ef40bee5c06b728e8a94923fdea08996f6670503] <==
	W1210 05:49:26.314813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:28.318153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:28.326147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:30.329977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:30.335819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:32.339488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:32.344326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:34.349856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:34.355300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:36.359249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:36.368407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:38.371502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:38.377088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:40.380929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:40.386269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:42.389692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:42.395729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:44.399033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:44.407506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:46.411792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:46.416960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:48.420680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:48.425494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:50.429981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:49:50.438726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-873698 -n addons-873698
helpers_test.go:270: (dbg) Run:  kubectl --context addons-873698 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-qcp5n ingress-nginx-admission-create-znwmw ingress-nginx-admission-patch-2zqn8
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-873698 describe pod hello-world-app-5d498dc89-qcp5n ingress-nginx-admission-create-znwmw ingress-nginx-admission-patch-2zqn8
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-873698 describe pod hello-world-app-5d498dc89-qcp5n ingress-nginx-admission-create-znwmw ingress-nginx-admission-patch-2zqn8: exit status 1 (70.448242ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-qcp5n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-873698/192.168.39.151
	Start Time:       Wed, 10 Dec 2025 05:49:50 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mzxxq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mzxxq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-qcp5n to addons-873698
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-znwmw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2zqn8" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-873698 describe pod hello-world-app-5d498dc89-qcp5n ingress-nginx-admission-create-znwmw ingress-nginx-admission-patch-2zqn8: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-873698 addons disable ingress-dns --alsologtostderr -v=1: (1.009903344s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-873698 addons disable ingress --alsologtostderr -v=1: (7.739685496s)
--- FAIL: TestAddons/parallel/Ingress (154.04s)
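For a manual follow-up on this failure, the post-mortem checks above can be repeated by hand; a minimal sketch, assuming the addons-873698 cluster is still running and a kubectl context of the same name exists. The commands mirror the helpers shown above; the grep filter and the app=hello-world-app label selector are conveniences added here (the label itself appears in the pod describe output).

	# Re-check which pods are not Running (same field selector the helper uses)
	kubectl --context addons-873698 get po -A --field-selector=status.phase!=Running
	# Describe the pending hello-world-app pod to see why it is stuck in ContainerCreating
	kubectl --context addons-873698 describe pod -l app=hello-world-app
	# Pull the cluster logs again and filter for the slow etcd applies seen in the dump above
	out/minikube-linux-amd64 -p addons-873698 logs | grep 'apply request took too long'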

                                                
                                    
TestPreload (120.03s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-741260 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1210 06:37:35.116337   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-741260 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m3.912252643s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-741260 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-741260 image pull gcr.io/k8s-minikube/busybox: (3.356707597s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-741260
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-741260: (6.742833428s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-741260 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-741260 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (43.413926751s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-741260 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
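The failed preload round trip can also be reproduced outside the test harness; a minimal sketch, assuming the same out/minikube-linux-amd64 binary and a scratch profile (preload-repro is a placeholder name, not from this run). The flags follow the preload_test.go invocations above; the final grep is an addition for convenience.

	# 1. Start without a preloaded tarball, pull an extra image, then stop
	out/minikube-linux-amd64 start -p preload-repro --memory=3072 --wait=true --preload=false --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p preload-repro image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p preload-repro
	# 2. Restart with preload enabled; the pulled image is expected to survive the restart
	out/minikube-linux-amd64 start -p preload-repro --wait=true --preload=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p preload-repro image list | grep busybox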
panic.go:615: *** TestPreload FAILED at 2025-12-10 06:39:22.940088396 +0000 UTC m=+3335.524606511
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-741260 -n test-preload-741260
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-741260 logs -n 25
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-140746 ssh -n multinode-140746-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:26 UTC │ 10 Dec 25 06:26 UTC │
	│ ssh     │ multinode-140746 ssh -n multinode-140746 sudo cat /home/docker/cp-test_multinode-140746-m03_multinode-140746.txt                                          │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:26 UTC │ 10 Dec 25 06:26 UTC │
	│ cp      │ multinode-140746 cp multinode-140746-m03:/home/docker/cp-test.txt multinode-140746-m02:/home/docker/cp-test_multinode-140746-m03_multinode-140746-m02.txt │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:26 UTC │ 10 Dec 25 06:26 UTC │
	│ ssh     │ multinode-140746 ssh -n multinode-140746-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:26 UTC │ 10 Dec 25 06:26 UTC │
	│ ssh     │ multinode-140746 ssh -n multinode-140746-m02 sudo cat /home/docker/cp-test_multinode-140746-m03_multinode-140746-m02.txt                                  │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:26 UTC │ 10 Dec 25 06:26 UTC │
	│ node    │ multinode-140746 node stop m03                                                                                                                            │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:26 UTC │ 10 Dec 25 06:26 UTC │
	│ node    │ multinode-140746 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:26 UTC │ 10 Dec 25 06:27 UTC │
	│ node    │ list -p multinode-140746                                                                                                                                  │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:27 UTC │                     │
	│ stop    │ -p multinode-140746                                                                                                                                       │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:27 UTC │ 10 Dec 25 06:30 UTC │
	│ start   │ -p multinode-140746 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:30 UTC │ 10 Dec 25 06:32 UTC │
	│ node    │ list -p multinode-140746                                                                                                                                  │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:32 UTC │                     │
	│ node    │ multinode-140746 node delete m03                                                                                                                          │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:32 UTC │ 10 Dec 25 06:32 UTC │
	│ stop    │ multinode-140746 stop                                                                                                                                     │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:32 UTC │ 10 Dec 25 06:35 UTC │
	│ start   │ -p multinode-140746 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:35 UTC │ 10 Dec 25 06:36 UTC │
	│ node    │ list -p multinode-140746                                                                                                                                  │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:36 UTC │                     │
	│ start   │ -p multinode-140746-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-140746-m02 │ jenkins │ v1.37.0 │ 10 Dec 25 06:36 UTC │                     │
	│ start   │ -p multinode-140746-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-140746-m03 │ jenkins │ v1.37.0 │ 10 Dec 25 06:36 UTC │ 10 Dec 25 06:37 UTC │
	│ node    │ add -p multinode-140746                                                                                                                                   │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	│ delete  │ -p multinode-140746-m03                                                                                                                                   │ multinode-140746-m03 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ delete  │ -p multinode-140746                                                                                                                                       │ multinode-140746     │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ start   │ -p test-preload-741260 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-741260  │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:38 UTC │
	│ image   │ test-preload-741260 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-741260  │ jenkins │ v1.37.0 │ 10 Dec 25 06:38 UTC │ 10 Dec 25 06:38 UTC │
	│ stop    │ -p test-preload-741260                                                                                                                                    │ test-preload-741260  │ jenkins │ v1.37.0 │ 10 Dec 25 06:38 UTC │ 10 Dec 25 06:38 UTC │
	│ start   │ -p test-preload-741260 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-741260  │ jenkins │ v1.37.0 │ 10 Dec 25 06:38 UTC │ 10 Dec 25 06:39 UTC │
	│ image   │ test-preload-741260 image list                                                                                                                            │ test-preload-741260  │ jenkins │ v1.37.0 │ 10 Dec 25 06:39 UTC │ 10 Dec 25 06:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:38:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:38:39.392811   38273 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:38:39.393043   38273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:39.393052   38273 out.go:374] Setting ErrFile to fd 2...
	I1210 06:38:39.393056   38273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:39.393281   38273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:38:39.393739   38273 out.go:368] Setting JSON to false
	I1210 06:38:39.394584   38273 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4863,"bootTime":1765343856,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:38:39.394643   38273 start.go:143] virtualization: kvm guest
	I1210 06:38:39.396969   38273 out.go:179] * [test-preload-741260] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:38:39.398278   38273 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:38:39.398344   38273 notify.go:221] Checking for updates...
	I1210 06:38:39.400756   38273 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:38:39.402070   38273 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:38:39.403437   38273 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 06:38:39.404639   38273 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:38:39.405872   38273 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:38:39.407575   38273 config.go:182] Loaded profile config "test-preload-741260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:38:39.408044   38273 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:38:39.442912   38273 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 06:38:39.444076   38273 start.go:309] selected driver: kvm2
	I1210 06:38:39.444095   38273 start.go:927] validating driver "kvm2" against &{Name:test-preload-741260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-741260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:39.444192   38273 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:38:39.445170   38273 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:38:39.445195   38273 cni.go:84] Creating CNI manager for ""
	I1210 06:38:39.445246   38273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:38:39.445295   38273 start.go:353] cluster config:
	{Name:test-preload-741260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-741260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:39.445391   38273 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:39.447391   38273 out.go:179] * Starting "test-preload-741260" primary control-plane node in "test-preload-741260" cluster
	I1210 06:38:39.448492   38273 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:38:39.448529   38273 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 06:38:39.448547   38273 cache.go:65] Caching tarball of preloaded images
	I1210 06:38:39.448631   38273 preload.go:238] Found /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:38:39.448643   38273 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 06:38:39.448742   38273 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/config.json ...
	I1210 06:38:39.448931   38273 start.go:360] acquireMachinesLock for test-preload-741260: {Name:mkc15d5369b31c34b8a5517a09471706fa3f291a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 06:38:39.448975   38273 start.go:364] duration metric: took 26.26µs to acquireMachinesLock for "test-preload-741260"
	I1210 06:38:39.448986   38273 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:38:39.448990   38273 fix.go:54] fixHost starting: 
	I1210 06:38:39.450566   38273 fix.go:112] recreateIfNeeded on test-preload-741260: state=Stopped err=<nil>
	W1210 06:38:39.450594   38273 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:38:39.452887   38273 out.go:252] * Restarting existing kvm2 VM for "test-preload-741260" ...
	I1210 06:38:39.452910   38273 main.go:143] libmachine: starting domain...
	I1210 06:38:39.452918   38273 main.go:143] libmachine: ensuring networks are active...
	I1210 06:38:39.453690   38273 main.go:143] libmachine: Ensuring network default is active
	I1210 06:38:39.454229   38273 main.go:143] libmachine: Ensuring network mk-test-preload-741260 is active
	I1210 06:38:39.454813   38273 main.go:143] libmachine: getting domain XML...
	I1210 06:38:39.456052   38273 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-741260</name>
	  <uuid>6a05c5f1-b7f1-4b0b-a230-aa6124beb5e6</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/test-preload-741260/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/test-preload-741260/test-preload-741260.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:6c:07:d9'/>
	      <source network='mk-test-preload-741260'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9c:03:e7'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
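The XML above is the domain definition the kvm2 driver hands to libvirt when it restarts the stopped test-preload-741260 VM. As a hedged sketch only (the driver talks to libvirt through its API, not through virsh), the same domain can be inspected from the host with the stock virsh client; the domain name, network name, connection URI and IP are taken from this log, while the commands themselves are illustrative:

	# sketch: inspect the restarted domain from the host
	virsh -c qemu:///system dominfo test-preload-741260                      # state, vCPUs, memory
	virsh -c qemu:///system dumpxml test-preload-741260                      # should match the XML printed above
	virsh -c qemu:///system domifaddr test-preload-741260 --source lease     # DHCP lease, 192.168.39.150 in this run
	virsh -c qemu:///system net-dhcp-leases mk-test-preload-741260           # leases on the minikube private network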
	
	I1210 06:38:40.722247   38273 main.go:143] libmachine: waiting for domain to start...
	I1210 06:38:40.723658   38273 main.go:143] libmachine: domain is now running
	I1210 06:38:40.723678   38273 main.go:143] libmachine: waiting for IP...
	I1210 06:38:40.724593   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:40.725092   38273 main.go:143] libmachine: domain test-preload-741260 has current primary IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:40.725106   38273 main.go:143] libmachine: found domain IP: 192.168.39.150
	I1210 06:38:40.725111   38273 main.go:143] libmachine: reserving static IP address...
	I1210 06:38:40.725461   38273 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-741260", mac: "52:54:00:6c:07:d9", ip: "192.168.39.150"} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:37:40 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:40.725486   38273 main.go:143] libmachine: skip adding static IP to network mk-test-preload-741260 - found existing host DHCP lease matching {name: "test-preload-741260", mac: "52:54:00:6c:07:d9", ip: "192.168.39.150"}
	I1210 06:38:40.725497   38273 main.go:143] libmachine: reserved static IP address 192.168.39.150 for domain test-preload-741260
	I1210 06:38:40.725505   38273 main.go:143] libmachine: waiting for SSH...
	I1210 06:38:40.725516   38273 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 06:38:40.727853   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:40.728237   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:37:40 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:40.728265   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:40.728471   38273 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:40.728716   38273 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1210 06:38:40.728728   38273 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 06:38:43.838744   38273 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.150:22: connect: no route to host
	I1210 06:38:49.918770   38273 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.150:22: connect: no route to host
	I1210 06:38:53.031907   38273 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:38:53.035446   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.035809   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:53.035828   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.036021   38273 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/config.json ...
	I1210 06:38:53.036234   38273 machine.go:94] provisionDockerMachine start ...
	I1210 06:38:53.038561   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.038869   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:53.038889   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.039030   38273 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:53.039262   38273 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1210 06:38:53.039274   38273 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:38:53.150083   38273 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 06:38:53.150110   38273 buildroot.go:166] provisioning hostname "test-preload-741260"
	I1210 06:38:53.153090   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.153669   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:53.153703   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.153910   38273 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:53.154147   38273 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1210 06:38:53.154162   38273 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-741260 && echo "test-preload-741260" | sudo tee /etc/hostname
	I1210 06:38:53.283641   38273 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-741260
	
	I1210 06:38:53.287008   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.287464   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:53.287497   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.287697   38273 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:53.287910   38273 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1210 06:38:53.287925   38273 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-741260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-741260/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-741260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:38:53.410004   38273 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:38:53.410028   38273 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8667/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8667/.minikube}
	I1210 06:38:53.410046   38273 buildroot.go:174] setting up certificates
	I1210 06:38:53.410055   38273 provision.go:84] configureAuth start
	I1210 06:38:53.413015   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.413458   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:53.413482   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.416005   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.416428   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:53.416459   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.416607   38273 provision.go:143] copyHostCerts
	I1210 06:38:53.416673   38273 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem, removing ...
	I1210 06:38:53.416691   38273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem
	I1210 06:38:53.416788   38273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem (1082 bytes)
	I1210 06:38:53.416913   38273 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem, removing ...
	I1210 06:38:53.416927   38273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem
	I1210 06:38:53.416972   38273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem (1123 bytes)
	I1210 06:38:53.417053   38273 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem, removing ...
	I1210 06:38:53.417063   38273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem
	I1210 06:38:53.417102   38273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem (1675 bytes)
	I1210 06:38:53.417171   38273 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem org=jenkins.test-preload-741260 san=[127.0.0.1 192.168.39.150 localhost minikube test-preload-741260]
	I1210 06:38:53.493575   38273 provision.go:177] copyRemoteCerts
	I1210 06:38:53.493647   38273 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:38:53.496269   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.496721   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:53.496749   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.496948   38273 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/test-preload-741260/id_rsa Username:docker}
	I1210 06:38:53.584077   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:38:53.614234   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 06:38:53.644611   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:38:53.676402   38273 provision.go:87] duration metric: took 266.321519ms to configureAuth
	I1210 06:38:53.676436   38273 buildroot.go:189] setting minikube options for container-runtime
	I1210 06:38:53.676606   38273 config.go:182] Loaded profile config "test-preload-741260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:38:53.679693   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.680150   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:53.680181   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.680413   38273 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:53.680683   38273 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1210 06:38:53.680701   38273 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:38:53.942544   38273 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:38:53.942571   38273 machine.go:97] duration metric: took 906.322837ms to provisionDockerMachine
	I1210 06:38:53.942583   38273 start.go:293] postStartSetup for "test-preload-741260" (driver="kvm2")
	I1210 06:38:53.942593   38273 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:38:53.942676   38273 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:38:53.945915   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.946444   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:53.946503   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:53.946706   38273 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/test-preload-741260/id_rsa Username:docker}
	I1210 06:38:54.033581   38273 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:38:54.038540   38273 info.go:137] Remote host: Buildroot 2025.02
	I1210 06:38:54.038568   38273 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/addons for local assets ...
	I1210 06:38:54.038628   38273 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/files for local assets ...
	I1210 06:38:54.038734   38273 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem -> 125882.pem in /etc/ssl/certs
	I1210 06:38:54.038879   38273 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:38:54.050412   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem --> /etc/ssl/certs/125882.pem (1708 bytes)
	I1210 06:38:54.079211   38273 start.go:296] duration metric: took 136.610479ms for postStartSetup
	I1210 06:38:54.079254   38273 fix.go:56] duration metric: took 14.630263671s for fixHost
	I1210 06:38:54.081920   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:54.082318   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:54.082365   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:54.082539   38273 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:54.082781   38273 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1210 06:38:54.082794   38273 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 06:38:54.195592   38273 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765348734.157447120
	
	I1210 06:38:54.195616   38273 fix.go:216] guest clock: 1765348734.157447120
	I1210 06:38:54.195626   38273 fix.go:229] Guest: 2025-12-10 06:38:54.15744712 +0000 UTC Remote: 2025-12-10 06:38:54.079258373 +0000 UTC m=+14.735636026 (delta=78.188747ms)
	I1210 06:38:54.195651   38273 fix.go:200] guest clock delta is within tolerance: 78.188747ms
	I1210 06:38:54.195658   38273 start.go:83] releasing machines lock for "test-preload-741260", held for 14.746676207s
	I1210 06:38:54.198487   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:54.198840   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:54.198861   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:54.199344   38273 ssh_runner.go:195] Run: cat /version.json
	I1210 06:38:54.199408   38273 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:38:54.202130   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:54.202567   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:54.202603   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:54.202615   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:54.202784   38273 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/test-preload-741260/id_rsa Username:docker}
	I1210 06:38:54.203228   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:54.203263   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:54.203462   38273 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/test-preload-741260/id_rsa Username:docker}
	I1210 06:38:54.283753   38273 ssh_runner.go:195] Run: systemctl --version
	I1210 06:38:54.317963   38273 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:38:54.458807   38273 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:38:54.466500   38273 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:38:54.466577   38273 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:38:54.485716   38273 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:38:54.485740   38273 start.go:496] detecting cgroup driver to use...
	I1210 06:38:54.485802   38273 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:38:54.506089   38273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:38:54.523845   38273 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:38:54.523934   38273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:38:54.542049   38273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:38:54.558311   38273 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:38:54.705858   38273 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:38:54.917906   38273 docker.go:234] disabling docker service ...
	I1210 06:38:54.917991   38273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:38:54.935487   38273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:38:54.950771   38273 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:38:55.107564   38273 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:38:55.249687   38273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:38:55.265257   38273 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:38:55.286848   38273 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:38:55.286918   38273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:38:55.303447   38273 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:38:55.303508   38273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:38:55.315931   38273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:38:55.328822   38273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:38:55.341518   38273 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:38:55.355001   38273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:38:55.367436   38273 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:38:55.387294   38273 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:38:55.399582   38273 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:38:55.410318   38273 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 06:38:55.410407   38273 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 06:38:55.430325   38273 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:38:55.442089   38273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:55.580840   38273 ssh_runner.go:195] Run: sudo systemctl restart crio
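The sed/tee commands above rewrite /etc/crio/crio.conf.d/02-crio.conf and /etc/crictl.yaml before CRI-O is restarted. A hedged way to confirm the result on the node (the key names and values come from the sed expressions in this log; the check commands are illustrative, not something minikube runs):

	# sketch: verify the CRI-O drop-in after the edits above
	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	cat /etc/crictl.yaml        # runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl version         # should report cri-o 1.29.1, as the log does a few lines below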
	I1210 06:38:55.686329   38273 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:38:55.686414   38273 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:38:55.691664   38273 start.go:564] Will wait 60s for crictl version
	I1210 06:38:55.691720   38273 ssh_runner.go:195] Run: which crictl
	I1210 06:38:55.695787   38273 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 06:38:55.731535   38273 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 06:38:55.731606   38273 ssh_runner.go:195] Run: crio --version
	I1210 06:38:55.760436   38273 ssh_runner.go:195] Run: crio --version
	I1210 06:38:55.791426   38273 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1210 06:38:55.795135   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:55.795540   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:38:55.795561   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:38:55.795852   38273 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 06:38:55.800345   38273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:38:55.816287   38273 kubeadm.go:884] updating cluster {Name:test-preload-741260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-741260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:38:55.816503   38273 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:38:55.816569   38273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:55.851699   38273 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1210 06:38:55.851788   38273 ssh_runner.go:195] Run: which lz4
	I1210 06:38:55.856027   38273 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 06:38:55.860760   38273 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 06:38:55.860797   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1210 06:38:57.086715   38273 crio.go:462] duration metric: took 1.230725165s to copy over tarball
	I1210 06:38:57.086920   38273 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 06:38:58.565542   38273 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.478582587s)
	I1210 06:38:58.565572   38273 crio.go:469] duration metric: took 1.478816762s to extract the tarball
	I1210 06:38:58.565581   38273 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 06:38:58.603260   38273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:58.645681   38273 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:38:58.645704   38273 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:38:58.645711   38273 kubeadm.go:935] updating node { 192.168.39.150 8443 v1.34.2 crio true true} ...
	I1210 06:38:58.645849   38273 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-741260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-741260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
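The [Unit]/[Service]/[Install] fragment above is rendered into a systemd drop-in and copied to the node a few lines below (scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A purely illustrative check of the rendered unit, not part of the test flow:

	# sketch: show the kubelet unit together with the 10-kubeadm.conf drop-in generated above
	systemctl cat kubelet
	# ExecStart should match the log: /var/lib/minikube/binaries/v1.34.2/kubelet --hostname-override=test-preload-741260 --node-ip=192.168.39.150 ...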
	I1210 06:38:58.645921   38273 ssh_runner.go:195] Run: crio config
	I1210 06:38:58.694882   38273 cni.go:84] Creating CNI manager for ""
	I1210 06:38:58.694912   38273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:38:58.694934   38273 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:38:58.694962   38273 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-741260 NodeName:test-preload-741260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:38:58.695122   38273 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-741260"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.150"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
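This is the complete kubeadm config generated for the restart; the scp step below writes it to /var/tmp/minikube/kubeadm.yaml.new on the node. As a hedged sketch (the validate invocation is mine, assuming kubeadm config validate is available in the bundled v1.34 binary; the log does not show minikube running it):

	# sketch: sanity-check the generated config with the bundled kubeadm binary
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new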
	
	I1210 06:38:58.695194   38273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 06:38:58.707560   38273 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:38:58.707640   38273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:38:58.719288   38273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1210 06:38:58.739172   38273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:38:58.759012   38273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 06:38:58.778720   38273 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I1210 06:38:58.782803   38273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:38:58.796501   38273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:58.933886   38273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:38:58.958284   38273 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260 for IP: 192.168.39.150
	I1210 06:38:58.958309   38273 certs.go:195] generating shared ca certs ...
	I1210 06:38:58.958327   38273 certs.go:227] acquiring lock for ca certs: {Name:mkbf1082c8328cc7c1360f5f8b344958e8aa5792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:38:58.958546   38273 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key
	I1210 06:38:58.958622   38273 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key
	I1210 06:38:58.958637   38273 certs.go:257] generating profile certs ...
	I1210 06:38:58.958740   38273 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/client.key
	I1210 06:38:58.958869   38273 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/apiserver.key.4e522249
	I1210 06:38:58.958954   38273 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/proxy-client.key
	I1210 06:38:58.959113   38273 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588.pem (1338 bytes)
	W1210 06:38:58.959167   38273 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588_empty.pem, impossibly tiny 0 bytes
	I1210 06:38:58.959183   38273 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:38:58.959225   38273 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:38:58.959265   38273 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:38:58.959300   38273 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem (1675 bytes)
	I1210 06:38:58.959395   38273 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem (1708 bytes)
	I1210 06:38:58.960323   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:38:58.995119   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:38:59.033098   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:38:59.063666   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:38:59.092970   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 06:38:59.121484   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:38:59.149278   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:38:59.178377   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:38:59.206328   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:38:59.239914   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588.pem --> /usr/share/ca-certificates/12588.pem (1338 bytes)
	I1210 06:38:59.268068   38273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem --> /usr/share/ca-certificates/125882.pem (1708 bytes)
	I1210 06:38:59.296327   38273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:38:59.316083   38273 ssh_runner.go:195] Run: openssl version
	I1210 06:38:59.322581   38273 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/125882.pem
	I1210 06:38:59.333855   38273 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/125882.pem /etc/ssl/certs/125882.pem
	I1210 06:38:59.345501   38273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125882.pem
	I1210 06:38:59.350897   38273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:56 /usr/share/ca-certificates/125882.pem
	I1210 06:38:59.350970   38273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125882.pem
	I1210 06:38:59.358629   38273 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:38:59.370213   38273 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/125882.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:38:59.381935   38273 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:59.393292   38273 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:38:59.405339   38273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:59.410796   38273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:59.410864   38273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:59.417930   38273 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:38:59.429780   38273 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:38:59.441034   38273 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12588.pem
	I1210 06:38:59.452842   38273 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12588.pem /etc/ssl/certs/12588.pem
	I1210 06:38:59.464339   38273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12588.pem
	I1210 06:38:59.469668   38273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:56 /usr/share/ca-certificates/12588.pem
	I1210 06:38:59.469720   38273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12588.pem
	I1210 06:38:59.476621   38273 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:38:59.487324   38273 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12588.pem /etc/ssl/certs/51391683.0
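
The sequence above copies each CA PEM under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). Below is a minimal Go sketch of that pattern, not minikube's actual helper: the function name is made up, the paths are the ones visible in the log, and it assumes the openssl binary is on PATH.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the "openssl x509 -hash -noout" + "ln -fs" steps in the log:
// it asks openssl for the certificate's subject hash and creates the
// /etc/ssl/certs/<hash>.0 symlink that TLS libraries use for CA lookup.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Paths as seen in the log; adjust for another certificate.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
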
	I1210 06:38:59.498637   38273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:38:59.503759   38273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:38:59.511249   38273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:38:59.518580   38273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:38:59.526132   38273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:38:59.533247   38273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:38:59.540409   38273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
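
The -checkend 86400 runs above only verify that each control-plane certificate stays valid for at least another 24 hours. The same check expressed in Go, as a standalone sketch rather than minikube's code path, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d, which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
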
	I1210 06:38:59.547510   38273 kubeadm.go:401] StartCluster: {Name:test-preload-741260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.2 ClusterName:test-preload-741260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:59.547594   38273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:38:59.547658   38273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:59.581167   38273 cri.go:89] found id: ""
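
Here the CRI containers in the kube-system namespace are enumerated through crictl's label filter; the empty `found id: ""` result means nothing is running yet on the restarted VM. A rough local equivalent of that single command via os/exec (run directly rather than through the ssh_runner, and assuming crictl is installed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the container IDs printed by
// `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
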
	I1210 06:38:59.581235   38273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:38:59.593807   38273 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:38:59.593826   38273 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:38:59.593875   38273 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:38:59.606232   38273 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:59.606868   38273 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-741260" does not appear in /home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:38:59.607072   38273 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-8667/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-741260" cluster setting kubeconfig missing "test-preload-741260" context setting]
	I1210 06:38:59.607551   38273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/kubeconfig: {Name:mke7eeebab9139e29de7a6356b74da28e2a42365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:38:59.608350   38273 kapi.go:59] client config for test-preload-741260: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/client.key", CAFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:38:59.609002   38273 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:38:59.609026   38273 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:38:59.609032   38273 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:38:59.609038   38273 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:38:59.609047   38273 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:38:59.609478   38273 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:38:59.622103   38273 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.150
	I1210 06:38:59.622147   38273 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:38:59.622159   38273 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 06:38:59.622216   38273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:59.658445   38273 cri.go:89] found id: ""
	I1210 06:38:59.658511   38273 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:38:59.683798   38273 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:59.695851   38273 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:38:59.695875   38273 kubeadm.go:158] found existing configuration files:
	
	I1210 06:38:59.695921   38273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:38:59.706931   38273 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:38:59.706990   38273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:38:59.718842   38273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:38:59.730034   38273 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:38:59.730093   38273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:38:59.742504   38273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:38:59.753817   38273 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:38:59.753892   38273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:59.765461   38273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:38:59.776224   38273 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:38:59.776296   38273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
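
Each of the four kubeconfig-style files under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (or, as here, when the file does not exist at all), so that kubeadm can regenerate it. A compressed sketch of that decision, with the endpoint hard-coded as it appears in the log and the helper name invented for the example:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// removeIfStale deletes path unless it already references the expected
// control-plane endpoint, mirroring the grep-then-rm sequence above.
func removeIfStale(path string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // file is current, keep it
	}
	// Missing or stale: remove it so `kubeadm init phase kubeconfig` rewrites it.
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := removeIfStale("/etc/kubernetes/" + f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
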
	I1210 06:38:59.788032   38273 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:59.799633   38273 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:59.855274   38273 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:39:01.460483   38273 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6051693s)
	I1210 06:39:01.460564   38273 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:39:01.726334   38273 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:39:01.784465   38273 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
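
The restart then replays the relevant `kubeadm init` phases one after another against the generated /var/tmp/minikube/kubeadm.yaml, each time with the version-pinned binaries directory prepended to PATH. A sketch of that loop (just the same shell invocation driven from os/exec, not minikube's bootstrapper code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phases in the order they appear in the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", phase, err, out)
			os.Exit(1)
		}
	}
}
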
	I1210 06:39:01.866467   38273 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:39:01.866555   38273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:02.366933   38273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:02.867535   38273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:03.367501   38273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:03.866963   38273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:03.909398   38273 api_server.go:72] duration metric: took 2.042940437s to wait for apiserver process to appear ...
	I1210 06:39:03.909424   38273 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:39:03.909443   38273 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1210 06:39:03.909997   38273 api_server.go:269] stopped: https://192.168.39.150:8443/healthz: Get "https://192.168.39.150:8443/healthz": dial tcp 192.168.39.150:8443: connect: connection refused
	I1210 06:39:04.409918   38273 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1210 06:39:06.356342   38273 api_server.go:279] https://192.168.39.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:39:06.356386   38273 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:39:06.356405   38273 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1210 06:39:06.388833   38273 api_server.go:279] https://192.168.39.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:39:06.388861   38273 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:39:06.410205   38273 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1210 06:39:06.470313   38273 api_server.go:279] https://192.168.39.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:39:06.470344   38273 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:39:06.909948   38273 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1210 06:39:06.918189   38273 api_server.go:279] https://192.168.39.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:39:06.918211   38273 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:39:07.409812   38273 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1210 06:39:07.420691   38273 api_server.go:279] https://192.168.39.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:39:07.420722   38273 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:39:07.910414   38273 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1210 06:39:07.915319   38273 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I1210 06:39:07.921841   38273 api_server.go:141] control plane version: v1.34.2
	I1210 06:39:07.921866   38273 api_server.go:131] duration metric: took 4.012435833s to wait for apiserver health ...
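
The 403 and 500 responses above are expected while the apiserver's post-start hooks finish; the wait loop simply re-polls /healthz until it returns 200. A self-contained sketch of such a poll, assuming the client certificate, key, and CA paths from the kapi client config logged earlier (any other authenticated kubeconfig credentials would work the same way):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// Paths taken from the client config in the log above; adjust for another profile.
	const profile = "/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260"
	cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}},
	}

	// Poll roughly every 500ms, as the log does, until /healthz reports 200.
	for i := 0; i < 120; i++ {
		resp, err := client.Get("https://192.168.39.150:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}
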
	I1210 06:39:07.921875   38273 cni.go:84] Creating CNI manager for ""
	I1210 06:39:07.921883   38273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:39:07.923840   38273 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 06:39:07.925162   38273 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 06:39:07.950961   38273 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 06:39:07.972696   38273 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:39:07.979728   38273 system_pods.go:59] 7 kube-system pods found
	I1210 06:39:07.979781   38273 system_pods.go:61] "coredns-66bc5c9577-r2pmj" [346a73eb-6fd4-4bda-b21f-d46c5b3bb639] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:39:07.979790   38273 system_pods.go:61] "etcd-test-preload-741260" [8925b41a-805a-4540-b312-3ae29f0ecb08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:39:07.979798   38273 system_pods.go:61] "kube-apiserver-test-preload-741260" [9badb192-4b91-47df-be2d-5e9d13d69d38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:39:07.979804   38273 system_pods.go:61] "kube-controller-manager-test-preload-741260" [f782173f-8328-4533-9ae8-0d1d29b5cf99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:39:07.979809   38273 system_pods.go:61] "kube-proxy-svtd7" [b98bb63d-751d-4447-b72d-e1ab8f2901a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:39:07.979814   38273 system_pods.go:61] "kube-scheduler-test-preload-741260" [b6181af7-2f6a-4554-a6cd-ed047bfdd0cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:39:07.979819   38273 system_pods.go:61] "storage-provisioner" [ca775f95-bc38-46b1-a1b8-7b59ed159943] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:39:07.979827   38273 system_pods.go:74] duration metric: took 7.109931ms to wait for pod list to return data ...
	I1210 06:39:07.979836   38273 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:39:07.984776   38273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 06:39:07.984805   38273 node_conditions.go:123] node cpu capacity is 2
	I1210 06:39:07.984822   38273 node_conditions.go:105] duration metric: took 4.981468ms to run NodePressure ...
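
The two checks just completed (system_pods and NodePressure) boil down to listing kube-system pods and reading node capacity through the API. A client-go sketch of those reads, assuming a KUBECONFIG environment variable pointing at the kubeconfig this run wrote; it is an illustration, not the test helper itself:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Requires KUBECONFIG to be set to a valid kubeconfig for the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Counterpart of the system_pods wait: list kube-system pods and print their phase.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}

	// Capacity read behind the NodePressure check: CPU and ephemeral storage per node.
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s\tcpu=%s\tephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
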
	I1210 06:39:07.984882   38273 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:39:08.273833   38273 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1210 06:39:08.276932   38273 kubeadm.go:744] kubelet initialised
	I1210 06:39:08.276954   38273 kubeadm.go:745] duration metric: took 3.08986ms waiting for restarted kubelet to initialise ...
	I1210 06:39:08.276968   38273 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:39:08.292537   38273 ops.go:34] apiserver oom_adj: -16
	I1210 06:39:08.292562   38273 kubeadm.go:602] duration metric: took 8.698729401s to restartPrimaryControlPlane
	I1210 06:39:08.292576   38273 kubeadm.go:403] duration metric: took 8.745069316s to StartCluster
	I1210 06:39:08.292595   38273 settings.go:142] acquiring lock: {Name:mk3d395dc9d24e60f90f67efa719ff71be48daf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:39:08.292683   38273 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:39:08.293182   38273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/kubeconfig: {Name:mke7eeebab9139e29de7a6356b74da28e2a42365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:39:08.293439   38273 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:39:08.293510   38273 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:39:08.293604   38273 addons.go:70] Setting storage-provisioner=true in profile "test-preload-741260"
	I1210 06:39:08.293622   38273 addons.go:239] Setting addon storage-provisioner=true in "test-preload-741260"
	W1210 06:39:08.293630   38273 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:39:08.293655   38273 host.go:66] Checking if "test-preload-741260" exists ...
	I1210 06:39:08.293661   38273 addons.go:70] Setting default-storageclass=true in profile "test-preload-741260"
	I1210 06:39:08.293695   38273 config.go:182] Loaded profile config "test-preload-741260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:39:08.293708   38273 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-741260"
	I1210 06:39:08.295988   38273 kapi.go:59] client config for test-preload-741260: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/client.key", CAFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:39:08.296258   38273 addons.go:239] Setting addon default-storageclass=true in "test-preload-741260"
	W1210 06:39:08.296274   38273 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:39:08.296299   38273 host.go:66] Checking if "test-preload-741260" exists ...
	I1210 06:39:08.296600   38273 out.go:179] * Verifying Kubernetes components...
	I1210 06:39:08.297446   38273 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:39:08.297779   38273 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:39:08.297838   38273 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:39:08.298334   38273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:39:08.298983   38273 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:39:08.298999   38273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:39:08.300604   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:39:08.301019   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:39:08.301054   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:39:08.301199   38273 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/test-preload-741260/id_rsa Username:docker}
	I1210 06:39:08.301636   38273 main.go:143] libmachine: domain test-preload-741260 has defined MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:39:08.301959   38273 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:07:d9", ip: ""} in network mk-test-preload-741260: {Iface:virbr1 ExpiryTime:2025-12-10 07:38:50 +0000 UTC Type:0 Mac:52:54:00:6c:07:d9 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-741260 Clientid:01:52:54:00:6c:07:d9}
	I1210 06:39:08.301980   38273 main.go:143] libmachine: domain test-preload-741260 has defined IP address 192.168.39.150 and MAC address 52:54:00:6c:07:d9 in network mk-test-preload-741260
	I1210 06:39:08.302145   38273 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/test-preload-741260/id_rsa Username:docker}
	I1210 06:39:08.496166   38273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:39:08.524040   38273 node_ready.go:35] waiting up to 6m0s for node "test-preload-741260" to be "Ready" ...
	I1210 06:39:08.679149   38273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:39:08.680371   38273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:39:09.362330   38273 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 06:39:09.363432   38273 addons.go:530] duration metric: took 1.069926951s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1210 06:39:10.528238   38273 node_ready.go:57] node "test-preload-741260" has "Ready":"False" status (will retry)
	W1210 06:39:13.028221   38273 node_ready.go:57] node "test-preload-741260" has "Ready":"False" status (will retry)
	W1210 06:39:15.529009   38273 node_ready.go:57] node "test-preload-741260" has "Ready":"False" status (will retry)
	I1210 06:39:17.027815   38273 node_ready.go:49] node "test-preload-741260" is "Ready"
	I1210 06:39:17.027840   38273 node_ready.go:38] duration metric: took 8.503735636s for node "test-preload-741260" to be "Ready" ...
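
The node_ready wait that just finished keeps re-reading the node object until its Ready condition turns True. A small client-go sketch of that loop under the same KUBECONFIG assumption as the previous example, with the node name taken from the log:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Re-check every 2.5s, roughly the retry interval visible in the log above.
	for {
		n, err := client.CoreV1().Nodes().Get(context.Background(), "test-preload-741260", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		fmt.Println("node not Ready yet, retrying")
		time.Sleep(2500 * time.Millisecond)
	}
}
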
	I1210 06:39:17.027855   38273 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:39:17.027905   38273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:17.047735   38273 api_server.go:72] duration metric: took 8.75426268s to wait for apiserver process to appear ...
	I1210 06:39:17.047761   38273 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:39:17.047778   38273 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1210 06:39:17.053899   38273 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I1210 06:39:17.055124   38273 api_server.go:141] control plane version: v1.34.2
	I1210 06:39:17.055154   38273 api_server.go:131] duration metric: took 7.385803ms to wait for apiserver health ...
	I1210 06:39:17.055167   38273 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:39:17.058582   38273 system_pods.go:59] 7 kube-system pods found
	I1210 06:39:17.058614   38273 system_pods.go:61] "coredns-66bc5c9577-r2pmj" [346a73eb-6fd4-4bda-b21f-d46c5b3bb639] Running
	I1210 06:39:17.058625   38273 system_pods.go:61] "etcd-test-preload-741260" [8925b41a-805a-4540-b312-3ae29f0ecb08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:39:17.058634   38273 system_pods.go:61] "kube-apiserver-test-preload-741260" [9badb192-4b91-47df-be2d-5e9d13d69d38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:39:17.058644   38273 system_pods.go:61] "kube-controller-manager-test-preload-741260" [f782173f-8328-4533-9ae8-0d1d29b5cf99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:39:17.058652   38273 system_pods.go:61] "kube-proxy-svtd7" [b98bb63d-751d-4447-b72d-e1ab8f2901a3] Running
	I1210 06:39:17.058660   38273 system_pods.go:61] "kube-scheduler-test-preload-741260" [b6181af7-2f6a-4554-a6cd-ed047bfdd0cd] Running
	I1210 06:39:17.058667   38273 system_pods.go:61] "storage-provisioner" [ca775f95-bc38-46b1-a1b8-7b59ed159943] Running
	I1210 06:39:17.058681   38273 system_pods.go:74] duration metric: took 3.504105ms to wait for pod list to return data ...
	I1210 06:39:17.058696   38273 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:39:17.061198   38273 default_sa.go:45] found service account: "default"
	I1210 06:39:17.061220   38273 default_sa.go:55] duration metric: took 2.515394ms for default service account to be created ...
	I1210 06:39:17.061229   38273 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:39:17.063974   38273 system_pods.go:86] 7 kube-system pods found
	I1210 06:39:17.063997   38273 system_pods.go:89] "coredns-66bc5c9577-r2pmj" [346a73eb-6fd4-4bda-b21f-d46c5b3bb639] Running
	I1210 06:39:17.064006   38273 system_pods.go:89] "etcd-test-preload-741260" [8925b41a-805a-4540-b312-3ae29f0ecb08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:39:17.064014   38273 system_pods.go:89] "kube-apiserver-test-preload-741260" [9badb192-4b91-47df-be2d-5e9d13d69d38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:39:17.064020   38273 system_pods.go:89] "kube-controller-manager-test-preload-741260" [f782173f-8328-4533-9ae8-0d1d29b5cf99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:39:17.064025   38273 system_pods.go:89] "kube-proxy-svtd7" [b98bb63d-751d-4447-b72d-e1ab8f2901a3] Running
	I1210 06:39:17.064030   38273 system_pods.go:89] "kube-scheduler-test-preload-741260" [b6181af7-2f6a-4554-a6cd-ed047bfdd0cd] Running
	I1210 06:39:17.064033   38273 system_pods.go:89] "storage-provisioner" [ca775f95-bc38-46b1-a1b8-7b59ed159943] Running
	I1210 06:39:17.064038   38273 system_pods.go:126] duration metric: took 2.805135ms to wait for k8s-apps to be running ...
	I1210 06:39:17.064048   38273 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:39:17.064090   38273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:39:17.079425   38273 system_svc.go:56] duration metric: took 15.369793ms WaitForService to wait for kubelet
	I1210 06:39:17.079454   38273 kubeadm.go:587] duration metric: took 8.785988337s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:39:17.079476   38273 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:39:17.082135   38273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 06:39:17.082159   38273 node_conditions.go:123] node cpu capacity is 2
	I1210 06:39:17.082174   38273 node_conditions.go:105] duration metric: took 2.689894ms to run NodePressure ...
	I1210 06:39:17.082189   38273 start.go:242] waiting for startup goroutines ...
	I1210 06:39:17.082200   38273 start.go:247] waiting for cluster config update ...
	I1210 06:39:17.082214   38273 start.go:256] writing updated cluster config ...
	I1210 06:39:17.082598   38273 ssh_runner.go:195] Run: rm -f paused
	I1210 06:39:17.087231   38273 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:39:17.087677   38273 kapi.go:59] client config for test-preload-741260: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/test-preload-741260/client.key", CAFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:39:17.091049   38273 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r2pmj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:17.096241   38273 pod_ready.go:94] pod "coredns-66bc5c9577-r2pmj" is "Ready"
	I1210 06:39:17.096274   38273 pod_ready.go:86] duration metric: took 5.206871ms for pod "coredns-66bc5c9577-r2pmj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:17.098704   38273 pod_ready.go:83] waiting for pod "etcd-test-preload-741260" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:39:19.108194   38273 pod_ready.go:104] pod "etcd-test-preload-741260" is not "Ready", error: <nil>
	I1210 06:39:20.104783   38273 pod_ready.go:94] pod "etcd-test-preload-741260" is "Ready"
	I1210 06:39:20.104814   38273 pod_ready.go:86] duration metric: took 3.006082116s for pod "etcd-test-preload-741260" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:20.106888   38273 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-741260" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:20.110287   38273 pod_ready.go:94] pod "kube-apiserver-test-preload-741260" is "Ready"
	I1210 06:39:20.110313   38273 pod_ready.go:86] duration metric: took 3.400611ms for pod "kube-apiserver-test-preload-741260" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:20.114946   38273 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-741260" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:22.120710   38273 pod_ready.go:94] pod "kube-controller-manager-test-preload-741260" is "Ready"
	I1210 06:39:22.120747   38273 pod_ready.go:86] duration metric: took 2.005780164s for pod "kube-controller-manager-test-preload-741260" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:22.123180   38273 pod_ready.go:83] waiting for pod "kube-proxy-svtd7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:22.128081   38273 pod_ready.go:94] pod "kube-proxy-svtd7" is "Ready"
	I1210 06:39:22.128107   38273 pod_ready.go:86] duration metric: took 4.905154ms for pod "kube-proxy-svtd7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:22.291481   38273 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-741260" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:22.692059   38273 pod_ready.go:94] pod "kube-scheduler-test-preload-741260" is "Ready"
	I1210 06:39:22.692086   38273 pod_ready.go:86] duration metric: took 400.573098ms for pod "kube-scheduler-test-preload-741260" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:39:22.692104   38273 pod_ready.go:40] duration metric: took 5.60484866s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:39:22.736955   38273 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:39:22.738982   38273 out.go:179] * Done! kubectl is now configured to use "test-preload-741260" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.497343109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348763497321589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76cae29c-06a3-4fad-8d9a-1e8a62b7c147 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.498315007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94b39e95-8552-4265-9357-b798a8e0fc9a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.498481550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94b39e95-8552-4265-9357-b798a8e0fc9a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.498721651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49c3cb874a36fd451357119f9cf1cc93032625419b5b68bf05182ef78e340ecf,PodSandboxId:c24ea2f2652bdad2c81e70b1bc2904a8ada55c0fca8eb33d45a59a41e3272605,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348754886546124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r2pmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346a73eb-6fd4-4bda-b21f-d46c5b3bb639,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bb0d66d8d13c0548ba2ff92419ef18d88e6a3a1dee7668159f6a419bd7bfce,PodSandboxId:11043fba29e3dcc3d7d1be5b717f36c7a7576d16895b05b92a7e5c35c19fcb8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348747237213278,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-svtd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98bb63d-751d-4447-b72d-e1ab8f2901a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba17d67255a33ef8b0ce9bf262ed99f54b21c5fd3db3ef59937e093e9b73352f,PodSandboxId:853ae2c4ab3fd5f5a337a9c57f5b3f174fcf2ec7e589c4b938edd8b3b8b07d9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765348747244662342,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca775f95-bc38-46b1-a1b8-7b59ed159943,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555ee3f8ab3a53917f1169e9b753b158bf255fc269f77726e3d376c8029b980a,PodSandboxId:d219d57720ce135e1a50882dabb92af33af7433c2a0b7f7dfdc09275b0f462c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348743677923131,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7203e285d493f9276ce084849b6c5e2e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67d1afa857de63549159c3db1f23a5bd2e94311648c61a29bfc7e8e84fffcad5,PodSandboxId:b22d65fff91691ba127389f410719522625f6d7584d58390f378c0ea4e1e043a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e
3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348743647002987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb836b0cf86efc6ea5d3b3d191ef88e4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3542cbfb95e48e8d8c463cdd6d8a3f16efe22376b7b4741d1066b80fd78bf9d,PodSandboxId:ac0341b914b16b49bac035ef9f4cca2c004c59f27e4ad4d8912604218e7b2947,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348743645470062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99dde823fecf044f6ee594c3fe359432,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297951826ff7969c7225a418cdd568404dfb4a9a65737347bb4e605a2640fc6a,PodSandboxId:d5c378f687385789db0e2d55a288d04c5cf9303fe2398ea7c2bea177c9015b8e,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348743630480502,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7203b24dd7bc11cd4a5b501877a505eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94b39e95-8552-4265-9357-b798a8e0fc9a name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.531778318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10d9a7ac-1674-40da-85e1-52ff20f155c2 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.531861605Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10d9a7ac-1674-40da-85e1-52ff20f155c2 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.533386539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=470f9db3-53e1-46b8-a82f-1e912648f289 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.534103745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348763534078640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=470f9db3-53e1-46b8-a82f-1e912648f289 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.534973842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0d05b54-a925-46ae-8107-ec0e0c8b209a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.535022892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0d05b54-a925-46ae-8107-ec0e0c8b209a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.535166653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49c3cb874a36fd451357119f9cf1cc93032625419b5b68bf05182ef78e340ecf,PodSandboxId:c24ea2f2652bdad2c81e70b1bc2904a8ada55c0fca8eb33d45a59a41e3272605,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348754886546124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r2pmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346a73eb-6fd4-4bda-b21f-d46c5b3bb639,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bb0d66d8d13c0548ba2ff92419ef18d88e6a3a1dee7668159f6a419bd7bfce,PodSandboxId:11043fba29e3dcc3d7d1be5b717f36c7a7576d16895b05b92a7e5c35c19fcb8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348747237213278,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-svtd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98bb63d-751d-4447-b72d-e1ab8f2901a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba17d67255a33ef8b0ce9bf262ed99f54b21c5fd3db3ef59937e093e9b73352f,PodSandboxId:853ae2c4ab3fd5f5a337a9c57f5b3f174fcf2ec7e589c4b938edd8b3b8b07d9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765348747244662342,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca775f95-bc38-46b1-a1b8-7b59ed159943,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555ee3f8ab3a53917f1169e9b753b158bf255fc269f77726e3d376c8029b980a,PodSandboxId:d219d57720ce135e1a50882dabb92af33af7433c2a0b7f7dfdc09275b0f462c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348743677923131,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7203e285d493f9276ce084849b6c5e2e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67d1afa857de63549159c3db1f23a5bd2e94311648c61a29bfc7e8e84fffcad5,PodSandboxId:b22d65fff91691ba127389f410719522625f6d7584d58390f378c0ea4e1e043a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e
3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348743647002987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb836b0cf86efc6ea5d3b3d191ef88e4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3542cbfb95e48e8d8c463cdd6d8a3f16efe22376b7b4741d1066b80fd78bf9d,PodSandboxId:ac0341b914b16b49bac035ef9f4cca2c004c59f27e4ad4d8912604218e7b2947,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348743645470062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99dde823fecf044f6ee594c3fe359432,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297951826ff7969c7225a418cdd568404dfb4a9a65737347bb4e605a2640fc6a,PodSandboxId:d5c378f687385789db0e2d55a288d04c5cf9303fe2398ea7c2bea177c9015b8e,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348743630480502,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7203b24dd7bc11cd4a5b501877a505eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0d05b54-a925-46ae-8107-ec0e0c8b209a name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.568545735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=449dfb71-c819-465c-a14f-a551dcdb1bd6 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.568777391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=449dfb71-c819-465c-a14f-a551dcdb1bd6 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.570228282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66bf9b8d-a068-4be0-b3bd-f2bcd1b7e4a1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.570599029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348763570579364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66bf9b8d-a068-4be0-b3bd-f2bcd1b7e4a1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.571807613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40e334f3-a356-4f0d-b49c-ffeafd20f280 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.571877344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40e334f3-a356-4f0d-b49c-ffeafd20f280 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.572039438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49c3cb874a36fd451357119f9cf1cc93032625419b5b68bf05182ef78e340ecf,PodSandboxId:c24ea2f2652bdad2c81e70b1bc2904a8ada55c0fca8eb33d45a59a41e3272605,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348754886546124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r2pmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346a73eb-6fd4-4bda-b21f-d46c5b3bb639,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bb0d66d8d13c0548ba2ff92419ef18d88e6a3a1dee7668159f6a419bd7bfce,PodSandboxId:11043fba29e3dcc3d7d1be5b717f36c7a7576d16895b05b92a7e5c35c19fcb8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348747237213278,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-svtd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98bb63d-751d-4447-b72d-e1ab8f2901a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba17d67255a33ef8b0ce9bf262ed99f54b21c5fd3db3ef59937e093e9b73352f,PodSandboxId:853ae2c4ab3fd5f5a337a9c57f5b3f174fcf2ec7e589c4b938edd8b3b8b07d9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765348747244662342,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca775f95-bc38-46b1-a1b8-7b59ed159943,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555ee3f8ab3a53917f1169e9b753b158bf255fc269f77726e3d376c8029b980a,PodSandboxId:d219d57720ce135e1a50882dabb92af33af7433c2a0b7f7dfdc09275b0f462c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348743677923131,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7203e285d493f9276ce084849b6c5e2e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67d1afa857de63549159c3db1f23a5bd2e94311648c61a29bfc7e8e84fffcad5,PodSandboxId:b22d65fff91691ba127389f410719522625f6d7584d58390f378c0ea4e1e043a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e
3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348743647002987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb836b0cf86efc6ea5d3b3d191ef88e4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3542cbfb95e48e8d8c463cdd6d8a3f16efe22376b7b4741d1066b80fd78bf9d,PodSandboxId:ac0341b914b16b49bac035ef9f4cca2c004c59f27e4ad4d8912604218e7b2947,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348743645470062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99dde823fecf044f6ee594c3fe359432,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297951826ff7969c7225a418cdd568404dfb4a9a65737347bb4e605a2640fc6a,PodSandboxId:d5c378f687385789db0e2d55a288d04c5cf9303fe2398ea7c2bea177c9015b8e,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348743630480502,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7203b24dd7bc11cd4a5b501877a505eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40e334f3-a356-4f0d-b49c-ffeafd20f280 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.601628783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff7bef74-3210-4bbe-8b77-3a54565eaab2 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.601748210Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff7bef74-3210-4bbe-8b77-3a54565eaab2 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.602937836Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81abd0dc-19a7-4aa3-a7a2-0d0fd97d9214 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.603336707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348763603309810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81abd0dc-19a7-4aa3-a7a2-0d0fd97d9214 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.603989205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1de89c7e-3477-404d-af7a-6fdd972b8472 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.604176703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1de89c7e-3477-404d-af7a-6fdd972b8472 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:39:23 test-preload-741260 crio[837]: time="2025-12-10 06:39:23.604431841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49c3cb874a36fd451357119f9cf1cc93032625419b5b68bf05182ef78e340ecf,PodSandboxId:c24ea2f2652bdad2c81e70b1bc2904a8ada55c0fca8eb33d45a59a41e3272605,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348754886546124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r2pmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346a73eb-6fd4-4bda-b21f-d46c5b3bb639,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bb0d66d8d13c0548ba2ff92419ef18d88e6a3a1dee7668159f6a419bd7bfce,PodSandboxId:11043fba29e3dcc3d7d1be5b717f36c7a7576d16895b05b92a7e5c35c19fcb8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348747237213278,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-svtd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b98bb63d-751d-4447-b72d-e1ab8f2901a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba17d67255a33ef8b0ce9bf262ed99f54b21c5fd3db3ef59937e093e9b73352f,PodSandboxId:853ae2c4ab3fd5f5a337a9c57f5b3f174fcf2ec7e589c4b938edd8b3b8b07d9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765348747244662342,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca775f95-bc38-46b1-a1b8-7b59ed159943,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555ee3f8ab3a53917f1169e9b753b158bf255fc269f77726e3d376c8029b980a,PodSandboxId:d219d57720ce135e1a50882dabb92af33af7433c2a0b7f7dfdc09275b0f462c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348743677923131,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7203e285d493f9276ce084849b6c5e2e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67d1afa857de63549159c3db1f23a5bd2e94311648c61a29bfc7e8e84fffcad5,PodSandboxId:b22d65fff91691ba127389f410719522625f6d7584d58390f378c0ea4e1e043a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e
3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348743647002987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb836b0cf86efc6ea5d3b3d191ef88e4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3542cbfb95e48e8d8c463cdd6d8a3f16efe22376b7b4741d1066b80fd78bf9d,PodSandboxId:ac0341b914b16b49bac035ef9f4cca2c004c59f27e4ad4d8912604218e7b2947,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348743645470062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99dde823fecf044f6ee594c3fe359432,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297951826ff7969c7225a418cdd568404dfb4a9a65737347bb4e605a2640fc6a,PodSandboxId:d5c378f687385789db0e2d55a288d04c5cf9303fe2398ea7c2bea177c9015b8e,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348743630480502,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-741260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7203b24dd7bc11cd4a5b501877a505eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1de89c7e-3477-404d-af7a-6fdd972b8472 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	49c3cb874a36f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   8 seconds ago       Running             coredns                   1                   c24ea2f2652bd       coredns-66bc5c9577-r2pmj                      kube-system
	ba17d67255a33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   853ae2c4ab3fd       storage-provisioner                           kube-system
	26bb0d66d8d13       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                1                   11043fba29e3d       kube-proxy-svtd7                              kube-system
	555ee3f8ab3a5       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   19 seconds ago      Running             kube-scheduler            1                   d219d57720ce1       kube-scheduler-test-preload-741260            kube-system
	67d1afa857de6       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago      Running             kube-controller-manager   1                   b22d65fff9169       kube-controller-manager-test-preload-741260   kube-system
	b3542cbfb95e4       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago      Running             kube-apiserver            1                   ac0341b914b16       kube-apiserver-test-preload-741260            kube-system
	297951826ff79       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      1                   d5c378f687385       etcd-test-preload-741260                      kube-system
	
	
	==> coredns [49c3cb874a36fd451357119f9cf1cc93032625419b5b68bf05182ef78e340ecf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36505 - 28090 "HINFO IN 8789955792677218699.7525266478352543101. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066876448s
	
	
	==> describe nodes <==
	Name:               test-preload-741260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-741260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=test-preload-741260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_38_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:38:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-741260
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:39:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:39:16 +0000   Wed, 10 Dec 2025 06:38:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:39:16 +0000   Wed, 10 Dec 2025 06:38:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:39:16 +0000   Wed, 10 Dec 2025 06:38:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:39:16 +0000   Wed, 10 Dec 2025 06:39:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    test-preload-741260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05c5f1b7f14b0ba230aa6124beb5e6
	  System UUID:                6a05c5f1-b7f1-4b0b-a230-aa6124beb5e6
	  Boot ID:                    45dac63c-4917-49a1-a11c-317c04b3272c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-r2pmj                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     64s
	  kube-system                 etcd-test-preload-741260                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         69s
	  kube-system                 kube-apiserver-test-preload-741260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-test-preload-741260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-svtd7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-test-preload-741260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 63s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   Starting                 76s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node test-preload-741260 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s (x7 over 75s)  kubelet          Node test-preload-741260 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node test-preload-741260 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    69s                kubelet          Node test-preload-741260 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  69s                kubelet          Node test-preload-741260 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     69s                kubelet          Node test-preload-741260 status is now: NodeHasSufficientPID
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Normal   NodeReady                68s                kubelet          Node test-preload-741260 status is now: NodeReady
	  Normal   RegisteredNode           65s                node-controller  Node test-preload-741260 event: Registered Node test-preload-741260 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-741260 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-741260 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-741260 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                kubelet          Node test-preload-741260 has been rebooted, boot id: 45dac63c-4917-49a1-a11c-317c04b3272c
	  Normal   RegisteredNode           14s                node-controller  Node test-preload-741260 event: Registered Node test-preload-741260 in Controller
	
	
	==> dmesg <==
	[Dec10 06:38] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001493] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009505] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.951491] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.118729] kauditd_printk_skb: 60 callbacks suppressed
	[Dec10 06:39] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.490384] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.000044] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.024219] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [297951826ff7969c7225a418cdd568404dfb4a9a65737347bb4e605a2640fc6a] <==
	{"level":"warn","ts":"2025-12-10T06:39:05.533012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.544749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.552637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.561547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.569260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.579127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.585848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.594459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.602012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.617301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.622883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.632115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.640647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.647765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.656490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.666576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.672121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.683046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.688416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.711853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.731027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.742249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.749847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.756138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:39:05.799294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41616","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:39:23 up 0 min,  0 users,  load average: 0.46, 0.12, 0.04
	Linux test-preload-741260 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b3542cbfb95e48e8d8c463cdd6d8a3f16efe22376b7b4741d1066b80fd78bf9d] <==
	I1210 06:39:06.465837       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:39:06.473963       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 06:39:06.474941       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:39:06.476530       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:39:06.476577       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 06:39:06.476584       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 06:39:06.476708       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:39:06.487381       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 06:39:06.487567       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 06:39:06.491091       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 06:39:06.492941       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:39:06.493003       1 aggregator.go:171] initial CRD sync complete...
	I1210 06:39:06.493021       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:39:06.493036       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:39:06.493051       1 cache.go:39] Caches are synced for autoregister controller
	E1210 06:39:06.527334       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:39:06.909219       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:39:07.330997       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:39:08.105788       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:39:08.148265       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:39:08.178608       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:39:08.184988       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:39:09.811588       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:39:09.991597       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:39:10.238470       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [67d1afa857de63549159c3db1f23a5bd2e94311648c61a29bfc7e8e84fffcad5] <==
	I1210 06:39:09.809282       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:39:09.818471       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:39:09.820888       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 06:39:09.820981       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 06:39:09.822268       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:39:09.823538       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 06:39:09.824494       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:39:09.825542       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:39:09.825706       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:39:09.825780       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:39:09.829648       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:39:09.833229       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:39:09.835742       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 06:39:09.836955       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:39:09.836999       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:39:09.837725       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:39:09.836984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 06:39:09.837864       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:39:09.839818       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:39:09.839874       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:39:09.839878       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:39:09.847064       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:39:09.847105       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:39:09.847110       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:39:19.782791       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [26bb0d66d8d13c0548ba2ff92419ef18d88e6a3a1dee7668159f6a419bd7bfce] <==
	I1210 06:39:07.513467       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:39:07.615379       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:39:07.615483       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.150"]
	E1210 06:39:07.615584       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:39:07.703630       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 06:39:07.703763       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 06:39:07.703836       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:39:07.717558       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:39:07.719301       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:39:07.719328       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:39:07.731492       1 config.go:200] "Starting service config controller"
	I1210 06:39:07.731563       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:39:07.731596       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:39:07.731611       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:39:07.731643       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:39:07.731658       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:39:07.732328       1 config.go:309] "Starting node config controller"
	I1210 06:39:07.732370       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:39:07.732387       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:39:07.833046       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:39:07.833191       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:39:07.833207       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [555ee3f8ab3a53917f1169e9b753b158bf255fc269f77726e3d376c8029b980a] <==
	I1210 06:39:04.759042       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:39:06.369591       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:39:06.369626       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:39:06.369639       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:39:06.369646       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:39:06.425024       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 06:39:06.425065       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:39:06.445380       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:39:06.445730       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:39:06.445765       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:39:06.454814       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:39:06.555914       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: I1210 06:39:06.791266    1187 apiserver.go:52] "Watching apiserver"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: E1210 06:39:06.795866    1187 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-r2pmj" podUID="346a73eb-6fd4-4bda-b21f-d46c5b3bb639"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: I1210 06:39:06.814946    1187 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: E1210 06:39:06.869369    1187 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: I1210 06:39:06.897179    1187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ca775f95-bc38-46b1-a1b8-7b59ed159943-tmp\") pod \"storage-provisioner\" (UID: \"ca775f95-bc38-46b1-a1b8-7b59ed159943\") " pod="kube-system/storage-provisioner"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: I1210 06:39:06.897250    1187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b98bb63d-751d-4447-b72d-e1ab8f2901a3-xtables-lock\") pod \"kube-proxy-svtd7\" (UID: \"b98bb63d-751d-4447-b72d-e1ab8f2901a3\") " pod="kube-system/kube-proxy-svtd7"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: I1210 06:39:06.897266    1187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b98bb63d-751d-4447-b72d-e1ab8f2901a3-lib-modules\") pod \"kube-proxy-svtd7\" (UID: \"b98bb63d-751d-4447-b72d-e1ab8f2901a3\") " pod="kube-system/kube-proxy-svtd7"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: E1210 06:39:06.897828    1187 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: E1210 06:39:06.897925    1187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/346a73eb-6fd4-4bda-b21f-d46c5b3bb639-config-volume podName:346a73eb-6fd4-4bda-b21f-d46c5b3bb639 nodeName:}" failed. No retries permitted until 2025-12-10 06:39:07.397904677 +0000 UTC m=+5.700858913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/346a73eb-6fd4-4bda-b21f-d46c5b3bb639-config-volume") pod "coredns-66bc5c9577-r2pmj" (UID: "346a73eb-6fd4-4bda-b21f-d46c5b3bb639") : object "kube-system"/"coredns" not registered
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: I1210 06:39:06.961624    1187 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-741260"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: I1210 06:39:06.962395    1187 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-741260"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: E1210 06:39:06.977735    1187 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-741260\" already exists" pod="kube-system/kube-scheduler-test-preload-741260"
	Dec 10 06:39:06 test-preload-741260 kubelet[1187]: E1210 06:39:06.979518    1187 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-741260\" already exists" pod="kube-system/etcd-test-preload-741260"
	Dec 10 06:39:07 test-preload-741260 kubelet[1187]: E1210 06:39:07.400735    1187 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 06:39:07 test-preload-741260 kubelet[1187]: E1210 06:39:07.401237    1187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/346a73eb-6fd4-4bda-b21f-d46c5b3bb639-config-volume podName:346a73eb-6fd4-4bda-b21f-d46c5b3bb639 nodeName:}" failed. No retries permitted until 2025-12-10 06:39:08.401219097 +0000 UTC m=+6.704173333 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/346a73eb-6fd4-4bda-b21f-d46c5b3bb639-config-volume") pod "coredns-66bc5c9577-r2pmj" (UID: "346a73eb-6fd4-4bda-b21f-d46c5b3bb639") : object "kube-system"/"coredns" not registered
	Dec 10 06:39:08 test-preload-741260 kubelet[1187]: E1210 06:39:08.406578    1187 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 06:39:08 test-preload-741260 kubelet[1187]: E1210 06:39:08.406664    1187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/346a73eb-6fd4-4bda-b21f-d46c5b3bb639-config-volume podName:346a73eb-6fd4-4bda-b21f-d46c5b3bb639 nodeName:}" failed. No retries permitted until 2025-12-10 06:39:10.406646449 +0000 UTC m=+8.709600685 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/346a73eb-6fd4-4bda-b21f-d46c5b3bb639-config-volume") pod "coredns-66bc5c9577-r2pmj" (UID: "346a73eb-6fd4-4bda-b21f-d46c5b3bb639") : object "kube-system"/"coredns" not registered
	Dec 10 06:39:08 test-preload-741260 kubelet[1187]: E1210 06:39:08.859519    1187 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-r2pmj" podUID="346a73eb-6fd4-4bda-b21f-d46c5b3bb639"
	Dec 10 06:39:10 test-preload-741260 kubelet[1187]: E1210 06:39:10.421485    1187 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 06:39:10 test-preload-741260 kubelet[1187]: E1210 06:39:10.421578    1187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/346a73eb-6fd4-4bda-b21f-d46c5b3bb639-config-volume podName:346a73eb-6fd4-4bda-b21f-d46c5b3bb639 nodeName:}" failed. No retries permitted until 2025-12-10 06:39:14.421564166 +0000 UTC m=+12.724518402 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/346a73eb-6fd4-4bda-b21f-d46c5b3bb639-config-volume") pod "coredns-66bc5c9577-r2pmj" (UID: "346a73eb-6fd4-4bda-b21f-d46c5b3bb639") : object "kube-system"/"coredns" not registered
	Dec 10 06:39:10 test-preload-741260 kubelet[1187]: E1210 06:39:10.859201    1187 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-r2pmj" podUID="346a73eb-6fd4-4bda-b21f-d46c5b3bb639"
	Dec 10 06:39:11 test-preload-741260 kubelet[1187]: E1210 06:39:11.871854    1187 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765348751870572217 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 10 06:39:11 test-preload-741260 kubelet[1187]: E1210 06:39:11.871891    1187 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765348751870572217 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 10 06:39:21 test-preload-741260 kubelet[1187]: E1210 06:39:21.873349    1187 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765348761873092375 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 10 06:39:21 test-preload-741260 kubelet[1187]: E1210 06:39:21.873366    1187 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765348761873092375 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [ba17d67255a33ef8b0ce9bf262ed99f54b21c5fd3db3ef59937e093e9b73352f] <==
	I1210 06:39:07.349824       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-741260 -n test-preload-741260
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-741260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-741260" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-741260
--- FAIL: TestPreload (120.03s)

                                                
                                    
TestKubernetesUpgrade (931.93s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921183 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921183 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.443647774s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-921183
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-921183: (1.908239613s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-921183 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-921183 status --format={{.Host}}: exit status 7 (64.866287ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921183 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921183 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.422596522s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-921183 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921183 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-921183 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.652603ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-921183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-921183
	    minikube start -p kubernetes-upgrade-921183 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9211832 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-921183 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921183 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-921183 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 80 (13m52.495151554s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-921183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-921183" primary control-plane node in "kubernetes-upgrade-921183" cluster
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:45:27.246297   45161 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:45:27.246427   45161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:45:27.246440   45161 out.go:374] Setting ErrFile to fd 2...
	I1210 06:45:27.246445   45161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:45:27.246689   45161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:45:27.247168   45161 out.go:368] Setting JSON to false
	I1210 06:45:27.248043   45161 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5271,"bootTime":1765343856,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:45:27.248108   45161 start.go:143] virtualization: kvm guest
	I1210 06:45:27.250113   45161 out.go:179] * [kubernetes-upgrade-921183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:45:27.251638   45161 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:45:27.251658   45161 notify.go:221] Checking for updates...
	I1210 06:45:27.253829   45161 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:45:27.254993   45161 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:45:27.256320   45161 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 06:45:27.260899   45161 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:45:27.262091   45161 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:45:27.263800   45161 config.go:182] Loaded profile config "kubernetes-upgrade-921183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:45:27.264508   45161 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:45:27.305189   45161 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 06:45:27.306280   45161 start.go:309] selected driver: kvm2
	I1210 06:45:27.306294   45161 start.go:927] validating driver "kvm2" against &{Name:kubernetes-upgrade-921183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-921183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Moun
tUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:45:27.306417   45161 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:45:27.307683   45161 cni.go:84] Creating CNI manager for ""
	I1210 06:45:27.307740   45161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:45:27.307774   45161 start.go:353] cluster config:
	{Name:kubernetes-upgrade-921183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-921183 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:45:27.307868   45161 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:45:27.309338   45161 out.go:179] * Starting "kubernetes-upgrade-921183" primary control-plane node in "kubernetes-upgrade-921183" cluster
	I1210 06:45:27.310519   45161 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:45:27.310554   45161 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:45:27.310564   45161 cache.go:65] Caching tarball of preloaded images
	I1210 06:45:27.310664   45161 preload.go:238] Found /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:45:27.310678   45161 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:45:27.310787   45161 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/config.json ...
	I1210 06:45:27.311018   45161 start.go:360] acquireMachinesLock for kubernetes-upgrade-921183: {Name:mkc15d5369b31c34b8a5517a09471706fa3f291a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 06:45:27.410515   45161 start.go:364] duration metric: took 99.467683ms to acquireMachinesLock for "kubernetes-upgrade-921183"
	I1210 06:45:27.410593   45161 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:45:27.410602   45161 fix.go:54] fixHost starting: 
	I1210 06:45:27.413288   45161 fix.go:112] recreateIfNeeded on kubernetes-upgrade-921183: state=Running err=<nil>
	W1210 06:45:27.413324   45161 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:45:27.415650   45161 out.go:252] * Updating the running kvm2 "kubernetes-upgrade-921183" VM ...
	I1210 06:45:27.415685   45161 machine.go:94] provisionDockerMachine start ...
	I1210 06:45:27.419476   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.419919   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:27.419974   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.420214   45161 main.go:143] libmachine: Using SSH client type: native
	I1210 06:45:27.420530   45161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I1210 06:45:27.420548   45161 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:45:27.535489   45161 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-921183
	
	I1210 06:45:27.535516   45161 buildroot.go:166] provisioning hostname "kubernetes-upgrade-921183"
	I1210 06:45:27.538870   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.539375   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:27.539409   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.539612   45161 main.go:143] libmachine: Using SSH client type: native
	I1210 06:45:27.539869   45161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I1210 06:45:27.539881   45161 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-921183 && echo "kubernetes-upgrade-921183" | sudo tee /etc/hostname
	I1210 06:45:27.708812   45161 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-921183
	
	I1210 06:45:27.712222   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.712730   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:27.712787   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.713036   45161 main.go:143] libmachine: Using SSH client type: native
	I1210 06:45:27.713383   45161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I1210 06:45:27.713410   45161 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-921183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-921183/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-921183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:45:27.831640   45161 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:45:27.831701   45161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8667/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8667/.minikube}
	I1210 06:45:27.831728   45161 buildroot.go:174] setting up certificates
	I1210 06:45:27.831739   45161 provision.go:84] configureAuth start
	I1210 06:45:27.835579   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.836140   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:27.836187   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.839547   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.840039   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:27.840075   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.840278   45161 provision.go:143] copyHostCerts
	I1210 06:45:27.840343   45161 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem, removing ...
	I1210 06:45:27.840384   45161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem
	I1210 06:45:27.840469   45161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem (1082 bytes)
	I1210 06:45:27.840584   45161 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem, removing ...
	I1210 06:45:27.840597   45161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem
	I1210 06:45:27.840635   45161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem (1123 bytes)
	I1210 06:45:27.840737   45161 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem, removing ...
	I1210 06:45:27.840759   45161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem
	I1210 06:45:27.840802   45161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem (1675 bytes)
	I1210 06:45:27.840874   45161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-921183 san=[127.0.0.1 192.168.50.121 kubernetes-upgrade-921183 localhost minikube]
	I1210 06:45:27.956495   45161 provision.go:177] copyRemoteCerts
	I1210 06:45:27.956560   45161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:45:27.958885   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.959345   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:27.959384   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:27.959550   45161 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/kubernetes-upgrade-921183/id_rsa Username:docker}
	I1210 06:45:28.051133   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:45:28.088599   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 06:45:28.127586   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:45:28.168067   45161 provision.go:87] duration metric: took 336.3108ms to configureAuth
	I1210 06:45:28.168100   45161 buildroot.go:189] setting minikube options for container-runtime
	I1210 06:45:28.168383   45161 config.go:182] Loaded profile config "kubernetes-upgrade-921183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:45:28.171233   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:28.171790   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:28.171824   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:28.172020   45161 main.go:143] libmachine: Using SSH client type: native
	I1210 06:45:28.172210   45161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I1210 06:45:28.172229   45161 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:45:28.934413   45161 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:45:28.934441   45161 machine.go:97] duration metric: took 1.51874649s to provisionDockerMachine
	I1210 06:45:28.934454   45161 start.go:293] postStartSetup for "kubernetes-upgrade-921183" (driver="kvm2")
	I1210 06:45:28.934465   45161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:45:28.934551   45161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:45:28.938169   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:28.938736   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:28.938781   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:28.938989   45161 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/kubernetes-upgrade-921183/id_rsa Username:docker}
	I1210 06:45:29.032024   45161 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:45:29.037705   45161 info.go:137] Remote host: Buildroot 2025.02
	I1210 06:45:29.037742   45161 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/addons for local assets ...
	I1210 06:45:29.037815   45161 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/files for local assets ...
	I1210 06:45:29.037933   45161 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem -> 125882.pem in /etc/ssl/certs
	I1210 06:45:29.038094   45161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:45:29.051246   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem --> /etc/ssl/certs/125882.pem (1708 bytes)
	I1210 06:45:29.084796   45161 start.go:296] duration metric: took 150.324566ms for postStartSetup
	I1210 06:45:29.084846   45161 fix.go:56] duration metric: took 1.674242738s for fixHost
	I1210 06:45:29.088797   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:29.089401   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:29.089437   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:29.089680   45161 main.go:143] libmachine: Using SSH client type: native
	I1210 06:45:29.089949   45161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I1210 06:45:29.089963   45161 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 06:45:29.308143   45161 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765349129.257441956
	
	I1210 06:45:29.308180   45161 fix.go:216] guest clock: 1765349129.257441956
	I1210 06:45:29.308191   45161 fix.go:229] Guest: 2025-12-10 06:45:29.257441956 +0000 UTC Remote: 2025-12-10 06:45:29.084851545 +0000 UTC m=+1.893060050 (delta=172.590411ms)
	I1210 06:45:29.308213   45161 fix.go:200] guest clock delta is within tolerance: 172.590411ms
	I1210 06:45:29.308221   45161 start.go:83] releasing machines lock for "kubernetes-upgrade-921183", held for 1.897659729s
	I1210 06:45:29.312123   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:29.312655   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:29.312702   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:29.313420   45161 ssh_runner.go:195] Run: cat /version.json
	I1210 06:45:29.313535   45161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:45:29.317409   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:29.318305   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:29.318339   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:29.318406   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:29.318785   45161 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/kubernetes-upgrade-921183/id_rsa Username:docker}
	I1210 06:45:29.319546   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:45:29.319577   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:45:29.319749   45161 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/kubernetes-upgrade-921183/id_rsa Username:docker}
	I1210 06:45:29.510985   45161 ssh_runner.go:195] Run: systemctl --version
	I1210 06:45:29.528814   45161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:45:29.731772   45161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:45:29.749864   45161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:45:29.749947   45161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:45:29.778001   45161 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:45:29.778030   45161 start.go:496] detecting cgroup driver to use...
	I1210 06:45:29.778106   45161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:45:29.835133   45161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:45:29.894929   45161 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:45:29.895061   45161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:45:29.953874   45161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:45:30.011612   45161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:45:30.420431   45161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:45:30.716116   45161 docker.go:234] disabling docker service ...
	I1210 06:45:30.716189   45161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:45:30.750631   45161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:45:30.783798   45161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:45:31.044536   45161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:45:31.261001   45161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:45:31.279481   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:45:31.304308   45161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:45:31.304413   45161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:45:31.318300   45161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:45:31.318474   45161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:45:31.333036   45161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:45:31.347146   45161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:45:31.364956   45161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:45:31.380224   45161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:45:31.395904   45161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:45:31.423124   45161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:45:31.437339   45161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:45:31.449792   45161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:45:31.465654   45161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:45:31.632187   45161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:47:01.948667   45161 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.316429753s)
	I1210 06:47:01.948713   45161 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:47:01.948776   45161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:47:01.957026   45161 start.go:564] Will wait 60s for crictl version
	I1210 06:47:01.957107   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:47:01.962606   45161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 06:47:02.020695   45161 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 06:47:02.020788   45161 ssh_runner.go:195] Run: crio --version
	I1210 06:47:02.058583   45161 ssh_runner.go:195] Run: crio --version
	I1210 06:47:02.109895   45161 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1210 06:47:02.115120   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:47:02.115664   45161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:46:7d", ip: ""} in network mk-kubernetes-upgrade-921183: {Iface:virbr2 ExpiryTime:2025-12-10 07:45:09 +0000 UTC Type:0 Mac:52:54:00:e6:46:7d Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-921183 Clientid:01:52:54:00:e6:46:7d}
	I1210 06:47:02.115697   45161 main.go:143] libmachine: domain kubernetes-upgrade-921183 has defined IP address 192.168.50.121 and MAC address 52:54:00:e6:46:7d in network mk-kubernetes-upgrade-921183
	I1210 06:47:02.115943   45161 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 06:47:02.121800   45161 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-921183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-921183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:47:02.121929   45161 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:47:02.122013   45161 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:47:02.175831   45161 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:47:02.175860   45161 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:47:02.175941   45161 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:47:02.217854   45161 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:47:02.217883   45161 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:47:02.217893   45161 kubeadm.go:935] updating node { 192.168.50.121 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:47:02.218084   45161 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-921183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-921183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:47:02.218193   45161 ssh_runner.go:195] Run: crio config
	I1210 06:47:02.293165   45161 cni.go:84] Creating CNI manager for ""
	I1210 06:47:02.293199   45161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:47:02.293221   45161 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:47:02.293258   45161 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.121 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-921183 NodeName:kubernetes-upgrade-921183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:47:02.293478   45161 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-921183"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:47:02.293673   45161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:47:02.309056   45161 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:47:02.309138   45161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:47:02.325118   45161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1210 06:47:02.357393   45161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:47:02.387632   45161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
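At this point the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a sketch only, assuming a kubeadm binary sits next to kubelet under /var/lib/minikube/binaries/v1.35.0-beta.0 and supports `kubeadm config validate`, the file could be sanity-checked in place without touching the cluster:

	# Hypothetical manual validation of the generated config (not part of the test):
	minikube -p kubernetes-upgrade-921183 ssh -- \
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new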
	I1210 06:47:02.413711   45161 ssh_runner.go:195] Run: grep 192.168.50.121	control-plane.minikube.internal$ /etc/hosts
	I1210 06:47:02.420900   45161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:47:02.676605   45161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:47:02.706349   45161 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183 for IP: 192.168.50.121
	I1210 06:47:02.706410   45161 certs.go:195] generating shared ca certs ...
	I1210 06:47:02.706458   45161 certs.go:227] acquiring lock for ca certs: {Name:mkbf1082c8328cc7c1360f5f8b344958e8aa5792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:47:02.706658   45161 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key
	I1210 06:47:02.706756   45161 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key
	I1210 06:47:02.706771   45161 certs.go:257] generating profile certs ...
	I1210 06:47:02.706941   45161 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/client.key
	I1210 06:47:02.707043   45161 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/apiserver.key.a1c29551
	I1210 06:47:02.707112   45161 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/proxy-client.key
	I1210 06:47:02.707276   45161 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588.pem (1338 bytes)
	W1210 06:47:02.707332   45161 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588_empty.pem, impossibly tiny 0 bytes
	I1210 06:47:02.707344   45161 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:47:02.707397   45161 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:47:02.707434   45161 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:47:02.707474   45161 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem (1675 bytes)
	I1210 06:47:02.707552   45161 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem (1708 bytes)
	I1210 06:47:02.708632   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:47:02.754542   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:47:02.796867   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:47:02.834157   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:47:02.875646   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 06:47:02.913831   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:47:02.953528   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:47:02.998915   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:47:03.043217   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem --> /usr/share/ca-certificates/125882.pem (1708 bytes)
	I1210 06:47:03.086395   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:47:03.127030   45161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588.pem --> /usr/share/ca-certificates/12588.pem (1338 bytes)
	I1210 06:47:03.166772   45161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:47:03.199271   45161 ssh_runner.go:195] Run: openssl version
	I1210 06:47:03.207896   45161 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:47:03.225121   45161 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:47:03.239717   45161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:47:03.248069   45161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:47:03.248172   45161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:47:03.260838   45161 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:47:03.274997   45161 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12588.pem
	I1210 06:47:03.293707   45161 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12588.pem /etc/ssl/certs/12588.pem
	I1210 06:47:03.310945   45161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12588.pem
	I1210 06:47:03.319286   45161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:56 /usr/share/ca-certificates/12588.pem
	I1210 06:47:03.319386   45161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12588.pem
	I1210 06:47:03.330050   45161 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:47:03.344842   45161 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/125882.pem
	I1210 06:47:03.368596   45161 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/125882.pem /etc/ssl/certs/125882.pem
	I1210 06:47:03.385386   45161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125882.pem
	I1210 06:47:03.393103   45161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:56 /usr/share/ca-certificates/125882.pem
	I1210 06:47:03.393178   45161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125882.pem
	I1210 06:47:03.401185   45161 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:47:03.414143   45161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:47:03.420401   45161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:47:03.428288   45161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:47:03.436587   45161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:47:03.445050   45161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:47:03.453621   45161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:47:03.462178   45161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
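The run of `openssl x509 -noout -checkend 86400` calls above verifies that each control-plane certificate is still valid for at least 24 hours (the command exits non-zero if not). A hedged equivalent that sweeps every certificate under /var/lib/minikube/certs in one pass, run inside the guest via `minikube ssh`, would be:

	# Hypothetical bulk version of the per-certificate expiry checks above:
	sudo find /var/lib/minikube/certs -name '*.crt' -exec sh -c \
	  'openssl x509 -noout -checkend 86400 -in "$1" || echo "expiring soon: $1"' _ {} \;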
	I1210 06:47:03.469926   45161 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-921183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-921183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:47:03.470008   45161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:47:03.470063   45161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:47:03.509966   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:47:03.510003   45161 cri.go:89] found id: "4ccedf845bbbfc6c91cab9794ee36bdd220b7e2bc5f3738c11b7a254ea9ec6e6"
	I1210 06:47:03.510010   45161 cri.go:89] found id: "37a7e7bdf5e48b1a47760bbd11927ae62224044d4a7ccc862d4ca227275d4b44"
	I1210 06:47:03.510015   45161 cri.go:89] found id: "afefcbb856b0f08a91f49a8c3a8439b89e5c993831421e016c4fdc2c3bc29763"
	I1210 06:47:03.510019   45161 cri.go:89] found id: "4217c802c1c1e6b3d68179813d468a5e4bf36725a69152327a31bec133b17292"
	I1210 06:47:03.510023   45161 cri.go:89] found id: "58b5d044572b357b41fb06eb948db19540c6bad2a93a332e074b1a6c91bdc3c8"
	I1210 06:47:03.510028   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:47:03.510032   45161 cri.go:89] found id: ""
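The IDs above come from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`; the trailing empty entry is most likely just the final newline of that output. Dropping `--quiet` maps each ID back to a pod and container name, which is the easiest way to read this list by hand:

	# Hypothetical re-run of the same listing with names instead of bare IDs:
	minikube -p kubernetes-upgrade-921183 ssh -- \
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system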
	I1210 06:47:03.510091   45161 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:47:03.562669   45161 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4/userdata","rootfs":"/var/lib/containers/storage/overlay/b98554a2aecda6e486323e897e46322c63fbd0a2ab07c752a8fb3d5b395af5fc/merged","created":"2025-12-10T06:45:21.566990233Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-12-10T06:45:20.843759610Z\",\"kubernetes.io/config.hash\":\"5bf81a03cc94837314d4e0f67906143e\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod5bf81a03cc94837314d4e0f67906143e","io.kubernetes.cri-o.ContainerID":"25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4","io.kubernetes.cri-o.ContainerName":"
k8s_POD_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-12-10T06:45:21.421618249Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-921183","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-921183","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5bf81a03cc94837314d4e0f67906143e\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-921183\"}","io.kubernetes.cri-o.LogPath":"/var/lo
g/pods/kube-system_kube-scheduler-kubernetes-upgrade-921183_5bf81a03cc94837314d4e0f67906143e/25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-921183\",\"uid\":\"5bf81a03cc94837314d4e0f67906143e\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b98554a2aecda6e486323e897e46322c63fbd0a2ab07c752a8fb3d5b395af5fc/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":10259,\"ContainerPort\":10259,\"Protocol\":\"TCP\",\"HostIP\":\"\"}
]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5bf81a03cc94837314d4e0f67906143e","kubernetes.io/config.hash":"5bf81a03cc94837314d4e0f67906143e","kubernetes.io/config.seen":"2025-12-10T06:45:20.843759610Z","kubernetes.io/config.source":"fi
le","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"37a7e7bdf5e48b1a47760bbd11927ae62224044d4a7ccc862d4ca227275d4b44","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/37a7e7bdf5e48b1a47760bbd11927ae62224044d4a7ccc862d4ca227275d4b44/userdata","rootfs":"/var/lib/containers/storage/overlay/1ce71244ae49d7de9a9563fb35a284f26c596093ea4230cdf324bc254f1f2b11/merged","created":"2025-12-10T06:45:29.591241686Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a67ffa3","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a67ffa3\",\"io.kubernetes.container.ports\":\"[{\\\"nam
e\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"37a7e7bdf5e48b1a47760bbd11927ae62224044d4a7ccc862d4ca227275d4b44","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-10T06:45:29.455197637Z","io.kubernetes.cri-o.Image":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0","io.kubernetes.cri-o.ImageRef":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-921183\",\"io.kube
rnetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4e712a5fdc89fc99f70c61173f5b6644\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-921183_4e712a5fdc89fc99f70c61173f5b6644/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1ce71244ae49d7de9a9563fb35a284f26c596093ea4230cdf324bc254f1f2b11/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller
-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4e712a5fdc89fc99f70c61173f5b6644/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4e712a5fdc89fc99f70c61173f5b6644/containers/kube-controller-manager/852b711c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_
path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4e712a5fdc89fc99f70c61173f5b6644","kubernetes.io/config.hash":"4e712a5fdc89fc99f70c61173f5b6644","kubernetes.io/config.seen":"2025-12-10T06:45:20.843758769Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f","pid":0,"stat
us":"stopped","bundle":"/run/containers/storage/overlay-containers/4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f/userdata","rootfs":"/var/lib/containers/storage/overlay/c805c902e1f83c048fedb42c3568656e605bfec54cd6d3f7525d81b6003fc53a/merged","created":"2025-12-10T06:45:21.531943536Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-12-10T06:45:20.843758769Z\",\"kubernetes.io/config.hash\":\"4e712a5fdc89fc99f70c61173f5b6644\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod4e712a5fdc89fc99f70c61173f5b6644","io.kubernetes.cri-o.ContainerID":"4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.k
ubernetes.cri-o.Created":"2025-12-10T06:45:21.442240206Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-921183","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-921183","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"4e712a5fdc89fc99f70c61173f5b6644\",\"component\":\"kube-controller-manager\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-921183\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-921183_4e712a5fdc89fc99f70c61173f5b6644/4077ef15855d23ce179
51d1204d1ff78be48186907256ddd947e9c4b46a4610f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-921183\",\"uid\":\"4e712a5fdc89fc99f70c61173f5b6644\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c805c902e1f83c048fedb42c3568656e605bfec54cd6d3f7525d81b6003fc53a/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":10257,\"ContainerPort\":10257,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/c
ontainers/storage/overlay-containers/4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4e712a5fdc89fc99f70c61173f5b6644","kubernetes.io/config.hash":"4e712a5fdc89fc99f70c61173f5b6644","kubernetes.io/config.seen":"2025-12-10T06:45:20.843758769Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"4217
c802c1c1e6b3d68179813d468a5e4bf36725a69152327a31bec133b17292","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4217c802c1c1e6b3d68179813d468a5e4bf36725a69152327a31bec133b17292/userdata","rootfs":"/var/lib/containers/storage/overlay/bfcb88ea24cf610b632eb16f80c4049b428d2ea7b4eae601e55b56d5cc06a942/merged","created":"2025-12-10T06:45:21.784036092Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a67ffa3","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a67ffa3\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,
\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4217c802c1c1e6b3d68179813d468a5e4bf36725a69152327a31bec133b17292","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-10T06:45:21.69396777Z","io.kubernetes.cri-o.Image":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0","io.kubernetes.cri-o.ImageRef":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4e712a5fdc
89fc99f70c61173f5b6644\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-921183_4e712a5fdc89fc99f70c61173f5b6644/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bfcb88ea24cf610b632eb16f80c4049b428d2ea7b4eae601e55b56d5cc06a942/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_0","io.kube
rnetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4e712a5fdc89fc99f70c61173f5b6644/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4e712a5fdc89fc99f70c61173f5b6644/containers/kube-controller-manager/5862fd1f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readon
ly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4e712a5fdc89fc99f70c61173f5b6644","kubernetes.io/config.hash":"4e712a5fdc89fc99f70c61173f5b6644","kubernetes.io/config.seen":"2025-12-10T06:45:20.843758769Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"4ccedf845bbbfc6c91cab9794ee36bdd220b7e2bc5f3738c11b7a254ea9ec6e6","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4ccedf845bbbfc6c91cab979
4ee36bdd220b7e2bc5f3738c11b7a254ea9ec6e6/userdata","rootfs":"/var/lib/containers/storage/overlay/a9f3eb7d89a3a7e71e398d40ce74f80e8c8562286a4a5fda12011ae713965121/merged","created":"2025-12-10T06:45:29.745191139Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b11f11f1","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b11f11f1\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.conta
iner.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4ccedf845bbbfc6c91cab9794ee36bdd220b7e2bc5f3738c11b7a254ea9ec6e6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-10T06:45:29.526738661Z","io.kubernetes.cri-o.Image":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.35.0-beta.0","io.kubernetes.cri-o.ImageRef":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5676ebdfcb3390f5d5962ea2906e4aa5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-921183_5676ebdfcb3390f5d5962ea2906e4aa5/kube-apiserver/1.log","io.kubernetes.cri-o.Me
tadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a9f3eb7d89a3a7e71e398d40ce74f80e8c8562286a4a5fda12011ae713965121/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"
/var/lib/kubelet/pods/5676ebdfcb3390f5d5962ea2906e4aa5/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5676ebdfcb3390f5d5962ea2906e4aa5/containers/kube-apiserver/bb0a3b00\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5676ebdfcb3390f5d5962ea2906e4aa5","kubeadm
.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.121:8443","kubernetes.io/config.hash":"5676ebdfcb3390f5d5962ea2906e4aa5","kubernetes.io/config.seen":"2025-12-10T06:45:20.843757796Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6/userdata","rootfs":"/var/lib/containers/storage/overlay/2295ac6f917d87cb9a32e137201bd4ee4b8ae6827e499b46121430b9a8a0b58f/merged","created":"2025-12-10T06:45:21.718149535Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5a6992ae","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termina
tion-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5a6992ae\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-10T06:45:21.630558753Z","io.kubernetes.cri-o.Image":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri-o.ImageRef":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","io.kubernetes.
cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb9c6b716ed479be7c5eb11a56ebe61a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-921183_fb9c6b716ed479be7c5eb11a56ebe61a/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2295ac6f917d87cb9a32e137201bd4ee4b8ae6827e499b46121430b9a8a0b58f/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d","io.kubernetes.cri-o.SandboxName":
"k8s_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fb9c6b716ed479be7c5eb11a56ebe61a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fb9c6b716ed479be7c5eb11a56ebe61a/containers/etcd/7a0673b2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ku
bernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fb9c6b716ed479be7c5eb11a56ebe61a","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.121:2379","kubernetes.io/config.hash":"fb9c6b716ed479be7c5eb11a56ebe61a","kubernetes.io/config.seen":"2025-12-10T06:45:20.843754680Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"58b5d044572b357b41fb06eb948db19540c6bad2a93a332e074b1a6c91bdc3c8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/58b5d044572b357b41fb06eb948db19540c6bad2a93a332e074b1a6c91bdc3c8/userdata","rootfs":"/var/lib/containers/storage/overlay/e534d78569237a63f0bf00cee703284f7211fb7c0d45d506a966aceae8ad23a1/merged","created":"2025-12-10T06:45:21.721734041Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b11f11f1","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\
":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b11f11f1\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"58b5d044572b357b41fb06eb948db19540c6bad2a93a332e074b1a6c91bdc3c8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-10T06:45:21.644968549Z","io.kubernetes.cri-o.Image":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b51
2b67cb52b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.35.0-beta.0","io.kubernetes.cri-o.ImageRef":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5676ebdfcb3390f5d5962ea2906e4aa5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-921183_5676ebdfcb3390f5d5962ea2906e4aa5/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e534d78569237a63f0bf00cee703284f7211fb7c0d45d506a966aceae8ad23a1/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.Resolv
Path":"/var/run/containers/storage/overlay-containers/bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5676ebdfcb3390f5d5962ea2906e4aa5/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5676ebdfcb3390f5d5962ea2906e4aa5/containers/kube-apiserver/27e1f8b2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/e
tc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5676ebdfcb3390f5d5962ea2906e4aa5","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.121:8443","kubernetes.io/config.hash":"5676ebdfcb3390f5d5962ea2906e4aa5","kubernetes.io/config.seen":"2025-12-10T06:45:20.843757796Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b","pid":0,"status":"stopped","bundle":"/run/containers/
storage/overlay-containers/71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b/userdata","rootfs":"/var/lib/containers/storage/overlay/3b986cee6bd949e9b2f129e2be9d846e6230dbc00a2f13d15c6c4213a3cd6b61/merged","created":"2025-12-10T06:45:29.19975186Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.seen\":\"2025-12-10T06:45:20.843758769Z\",\"kubernetes.io/config.hash\":\"4e712a5fdc89fc99f70c61173f5b6644\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod4e712a5fdc89fc99f70c61173f5b6644","io.kubernetes.cri-o.ContainerID":"71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-12-10T06:4
5:29.098900111Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-921183","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-921183","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"4e712a5fdc89fc99f70c61173f5b6644\",\"component\":\"kube-controller-manager\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-921183_4e712a5fdc89fc99f70c61173f5b6644/71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735
f83b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-921183\",\"uid\":\"4e712a5fdc89fc99f70c61173f5b6644\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b986cee6bd949e9b2f129e2be9d846e6230dbc00a2f13d15c6c4213a3cd6b61/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":10257,\"ContainerPort\":10257,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-c
ontainers/71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-921183_kube-system_4e712a5fdc89fc99f70c61173f5b6644_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4e712a5fdc89fc99f70c61173f5b6644","kubernetes.io/config.hash":"4e712a5fdc89fc99f70c61173f5b6644","kubernetes.io/config.seen":"2025-12-10T06:45:20.843758769Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"8d1028dffad16d97892b198eb725bcb
56175432924156a0048a287f418032829","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829/userdata","rootfs":"/var/lib/containers/storage/overlay/18b7b341fa7e2bfcd17cf2290a0b39c24f36a1f5efe99956a30d81228bcc4eb7/merged","created":"2025-12-10T06:45:29.745402856Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bf369231","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bf369231\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\
"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-10T06:45:29.559909361Z","io.kubernetes.cri-o.Image":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.35.0-beta.0","io.kubernetes.cri-o.ImageRef":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5bf81a03cc94837314d4e0f67906143e\"}","io.kubernetes.cri-o.LogPath":"/v
ar/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-921183_5bf81a03cc94837314d4e0f67906143e/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/18b7b341fa7e2bfcd17cf2290a0b39c24f36a1f5efe99956a30d81228bcc4eb7/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes
.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5bf81a03cc94837314d4e0f67906143e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5bf81a03cc94837314d4e0f67906143e/containers/kube-scheduler/d618e588\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5bf81a03cc94837314d4e0f67906143e","kubernetes.io/config.hash":"5bf81a03cc94837314d4e0f67906143e","kubernetes.io/config.seen":"2025-12-10T06:45:20.843759610Z","kubernetes.io/config
.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"afefcbb856b0f08a91f49a8c3a8439b89e5c993831421e016c4fdc2c3bc29763","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/afefcbb856b0f08a91f49a8c3a8439b89e5c993831421e016c4fdc2c3bc29763/userdata","rootfs":"/var/lib/containers/storage/overlay/bf1e2572a3f8c7bcca83e083de573c55c3cc008949101cfdf24ec29fbd488446/merged","created":"2025-12-10T06:45:21.772504117Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bf369231","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bf369231\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-po
rt\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"afefcbb856b0f08a91f49a8c3a8439b89e5c993831421e016c4fdc2c3bc29763","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-10T06:45:21.70118123Z","io.kubernetes.cri-o.Image":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.35.0-beta.0","io.kubernetes.cri-o.ImageRef":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.ku
bernetes.pod.uid\":\"5bf81a03cc94837314d4e0f67906143e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-921183_5bf81a03cc94837314d4e0f67906143e/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bf1e2572a3f8c7bcca83e083de573c55c3cc008949101cfdf24ec29fbd488446/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_0","io.kubernetes.cri-o.SeccompPro
filePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5bf81a03cc94837314d4e0f67906143e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5bf81a03cc94837314d4e0f67906143e/containers/kube-scheduler/96d9c0e3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5bf81a03cc94837314d4e0f67906143e","kubernetes.io/config.hash":"5bf81a03cc94837314d4e0f67906143e","kube
rnetes.io/config.seen":"2025-12-10T06:45:20.843759610Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6/userdata","rootfs":"/var/lib/containers/storage/overlay/2feb00e98bc06db338a1748b205ab717224df0b1923a3789166c00aeee74c1be/merged","created":"2025-12-10T06:45:29.330503313Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-12-10T06:45:20.843759610Z\",\"kubernetes.io/config.hash\":\"5bf81a03cc94837314d4e0f67906143e\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod5bf81a03cc94837314d4e0f67906143e","io.kubernetes.cri-o.ContainerID":"b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e172747
53f65cafffb6","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-12-10T06:45:29.121000293Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-921183","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-921183","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5bf81a03cc94837314d4e0f67906143e\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.container.name
\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-921183_5bf81a03cc94837314d4e0f67906143e/b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-921183\",\"uid\":\"5bf81a03cc94837314d4e0f67906143e\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2feb00e98bc06db338a1748b205ab717224df0b1923a3789166c00aeee74c1be/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":10
259,\"ContainerPort\":10259,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-921183_kube-system_5bf81a03cc94837314d4e0f67906143e_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5bf81a03cc94837314d4e0f67906143e","kubernetes.io/config.hash":"5bf81a03cc94837314d4e0f67906143e","kubernetes.io/config.seen":"
2025-12-10T06:45:20.843759610Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929/userdata","rootfs":"/var/lib/containers/storage/overlay/dc329ac15cf2f78a662d496defaccc3ef33ac0646c7355b163f56fc1a5e1b88e/merged","created":"2025-12-10T06:45:21.531501823Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-12-10T06:45:20.843757796Z\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.50.121:8443\",\"kubernetes.io/config.hash\":\"5676ebdfcb3390f5d5962ea2906e4aa5\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod5676ebdfcb3390f5d5962ea2906e4a
a5","io.kubernetes.cri-o.ContainerID":"bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-12-10T06:45:21.387880935Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-921183","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-921183","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"5676ebdfcb3390f5d5962ea2906e4aa5\",\"component\":\"kube-apiserver
\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-921183\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-921183_5676ebdfcb3390f5d5962ea2906e4aa5/bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-921183\",\"uid\":\"5676ebdfcb3390f5d5962ea2906e4aa5\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dc329ac15cf2f78a662d496defaccc3ef33ac0646c7355b163f56fc1a5e1b88e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memo
ry.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":8443,\"ContainerPort\":8443,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5676ebdfcb3390f5d5962ea2906e4aa5","kubeadm.kubernet
es.io/kube-apiserver.advertise-address.endpoint":"192.168.50.121:8443","kubernetes.io/config.hash":"5676ebdfcb3390f5d5962ea2906e4aa5","kubernetes.io/config.seen":"2025-12-10T06:45:20.843757796Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded/userdata","rootfs":"/var/lib/containers/storage/overlay/46472f74f885930b3d78333e30e912bdea2b9c88b64fa7fa409d15bc1503134d/merged","created":"2025-12-10T06:45:29.295155191Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-12-10T06:45:20.843757796Z\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.50.121:8443\",\"kubernetes.io/config.hash\":\"5676
ebdfcb3390f5d5962ea2906e4aa5\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod5676ebdfcb3390f5d5962ea2906e4aa5","io.kubernetes.cri-o.ContainerID":"df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-12-10T06:45:29.136286003Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-921183","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-921183","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod
.uid\":\"5676ebdfcb3390f5d5962ea2906e4aa5\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"kube-apiserver\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-921183_5676ebdfcb3390f5d5962ea2906e4aa5/df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-921183\",\"uid\":\"5676ebdfcb3390f5d5962ea2906e4aa5\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/46472f74f885930b3d78333e30e912bdea2b9c88b64fa7fa409d15bc1503134d/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"u
serns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":8443,\"ContainerPort\":8443,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-921183_kube-system_5676ebdfcb3390f5d5962ea2906e4aa5_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded/userdata/shm","io.kuberne
tes.pod.name":"kube-apiserver-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5676ebdfcb3390f5d5962ea2906e4aa5","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.121:8443","kubernetes.io/config.hash":"5676ebdfcb3390f5d5962ea2906e4aa5","kubernetes.io/config.seen":"2025-12-10T06:45:20.843757796Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d/userdata","rootfs":"/var/lib/containers/storage/overlay/ef41ff05b54f19fb26d9f8e791a9c7d6de09cbe75bc69e85723e702a1235dae3/merged","created":"2025-12-10T06:45:21.47880224Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.sour
ce\":\"file\",\"kubernetes.io/config.seen\":\"2025-12-10T06:45:20.843754680Z\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.50.121:2379\",\"kubernetes.io/config.hash\":\"fb9c6b716ed479be7c5eb11a56ebe61a\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podfb9c6b716ed479be7c5eb11a56ebe61a","io.kubernetes.cri-o.ContainerID":"e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-12-10T06:45:21.385508722Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-921183","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"regis
try.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-921183","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-921183\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"fb9c6b716ed479be7c5eb11a56ebe61a\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-921183_fb9c6b716ed479be7c5eb11a56ebe61a/e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-921183\",\"uid\":\"fb9c6b716ed479be7c5eb11a56ebe61a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ef41ff05b54f19fb26d9f8e791a9c7d6de09cbe75bc69e85723e702a1235dae3/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_0","io.kubernetes.cri-o.Namespace":"kube-s
ystem","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":2381,\"ContainerPort\":2381,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e229cc26556b026a412e8b1e0
422afc5e07a23f85f25c3c117d5ebcf8efd689d/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"fb9c6b716ed479be7c5eb11a56ebe61a","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.121:2379","kubernetes.io/config.hash":"fb9c6b716ed479be7c5eb11a56ebe61a","kubernetes.io/config.seen":"2025-12-10T06:45:20.843754680Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780/userdata","rootfs":"/var/lib/containers/storage/overlay/def3be5734227cad58ac2a42b7a6fc696885bca5d64c3fafc08a45781229b4df/merged","created":"2025-12-10T06:45:29.305896902Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes
.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.50.121:2379\",\"kubernetes.io/config.hash\":\"fb9c6b716ed479be7c5eb11a56ebe61a\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.seen\":\"2025-12-10T06:45:20.843754680Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podfb9c6b716ed479be7c5eb11a56ebe61a","io.kubernetes.cri-o.ContainerID":"f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-12-10T06:45:29.129745903Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-921183","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pau
se:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-921183","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb9c6b716ed479be7c5eb11a56ebe61a\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-921183\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-921183_fb9c6b716ed479be7c5eb11a56ebe61a/f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-921183\",\"uid\":\"fb9c6b716ed479be7c5eb11a56ebe61a\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/def3be5734227cad58ac2a42b7a6fc696885bca5d64c3fafc08a45781229b4df/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed4
79be7c5eb11a56ebe61a_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":2381,\"ContainerPort\":2381,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/
containers/storage/overlay-containers/f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-921183","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"fb9c6b716ed479be7c5eb11a56ebe61a","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.121:2379","kubernetes.io/config.hash":"fb9c6b716ed479be7c5eb11a56ebe61a","kubernetes.io/config.seen":"2025-12-10T06:45:20.843754680Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"}]
	I1210 06:47:03.563668   45161 cri.go:126] list returned 15 containers
	I1210 06:47:03.563696   45161 cri.go:129] container: {ID:25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4 Status:stopped}
	I1210 06:47:03.563730   45161 cri.go:131] skipping 25cfba311bf8fd4fc1ef8e775b5b87aaf2dc1520a3f0c9d70a55cb9c0c4898b4 - not in ps
	I1210 06:47:03.563737   45161 cri.go:129] container: {ID:37a7e7bdf5e48b1a47760bbd11927ae62224044d4a7ccc862d4ca227275d4b44 Status:stopped}
	I1210 06:47:03.563746   45161 cri.go:135] skipping {37a7e7bdf5e48b1a47760bbd11927ae62224044d4a7ccc862d4ca227275d4b44 stopped}: state = "stopped", want "paused"
	I1210 06:47:03.563759   45161 cri.go:129] container: {ID:4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f Status:stopped}
	I1210 06:47:03.563768   45161 cri.go:131] skipping 4077ef15855d23ce17951d1204d1ff78be48186907256ddd947e9c4b46a4610f - not in ps
	I1210 06:47:03.563779   45161 cri.go:129] container: {ID:4217c802c1c1e6b3d68179813d468a5e4bf36725a69152327a31bec133b17292 Status:stopped}
	I1210 06:47:03.563793   45161 cri.go:135] skipping {4217c802c1c1e6b3d68179813d468a5e4bf36725a69152327a31bec133b17292 stopped}: state = "stopped", want "paused"
	I1210 06:47:03.563805   45161 cri.go:129] container: {ID:4ccedf845bbbfc6c91cab9794ee36bdd220b7e2bc5f3738c11b7a254ea9ec6e6 Status:stopped}
	I1210 06:47:03.563817   45161 cri.go:135] skipping {4ccedf845bbbfc6c91cab9794ee36bdd220b7e2bc5f3738c11b7a254ea9ec6e6 stopped}: state = "stopped", want "paused"
	I1210 06:47:03.563825   45161 cri.go:129] container: {ID:522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6 Status:stopped}
	I1210 06:47:03.563838   45161 cri.go:135] skipping {522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6 stopped}: state = "stopped", want "paused"
	I1210 06:47:03.563849   45161 cri.go:129] container: {ID:58b5d044572b357b41fb06eb948db19540c6bad2a93a332e074b1a6c91bdc3c8 Status:stopped}
	I1210 06:47:03.563863   45161 cri.go:135] skipping {58b5d044572b357b41fb06eb948db19540c6bad2a93a332e074b1a6c91bdc3c8 stopped}: state = "stopped", want "paused"
	I1210 06:47:03.563872   45161 cri.go:129] container: {ID:71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b Status:stopped}
	I1210 06:47:03.563879   45161 cri.go:131] skipping 71a98a30c06ffe5c61f8bdebddfc957444c21bf920b1f997925156501735f83b - not in ps
	I1210 06:47:03.563887   45161 cri.go:129] container: {ID:8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829 Status:stopped}
	I1210 06:47:03.563898   45161 cri.go:135] skipping {8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829 stopped}: state = "stopped", want "paused"
	I1210 06:47:03.563907   45161 cri.go:129] container: {ID:afefcbb856b0f08a91f49a8c3a8439b89e5c993831421e016c4fdc2c3bc29763 Status:stopped}
	I1210 06:47:03.563914   45161 cri.go:135] skipping {afefcbb856b0f08a91f49a8c3a8439b89e5c993831421e016c4fdc2c3bc29763 stopped}: state = "stopped", want "paused"
	I1210 06:47:03.563924   45161 cri.go:129] container: {ID:b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6 Status:stopped}
	I1210 06:47:03.563930   45161 cri.go:131] skipping b092ff76aa58e31720dfa1c9655ac1786ef2508fd81c4e17274753f65cafffb6 - not in ps
	I1210 06:47:03.563935   45161 cri.go:129] container: {ID:bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929 Status:stopped}
	I1210 06:47:03.563944   45161 cri.go:131] skipping bbd86fdc2efe7d36e7cc5f0637c336a65e751ef3f8dffd270395eea023825929 - not in ps
	I1210 06:47:03.563949   45161 cri.go:129] container: {ID:df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded Status:stopped}
	I1210 06:47:03.563958   45161 cri.go:131] skipping df397c73517a77432377ee8cae38318ef16fc58d0ee5acb407118b6a6bc1eded - not in ps
	I1210 06:47:03.563963   45161 cri.go:129] container: {ID:e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d Status:stopped}
	I1210 06:47:03.563969   45161 cri.go:131] skipping e229cc26556b026a412e8b1e0422afc5e07a23f85f25c3c117d5ebcf8efd689d - not in ps
	I1210 06:47:03.563977   45161 cri.go:129] container: {ID:f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780 Status:stopped}
	I1210 06:47:03.563984   45161 cri.go:131] skipping f5bbb910bca980eced30d490394c6d98e87403ce250358e8d11f46216fc67780 - not in ps
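Editor's note: the cri.go lines above apply two skip rules to the runtime listing that precedes them: an ID is dropped if it is absent from the `crictl ps` output ("not in ps"), or if its state is anything other than "paused". The following is a minimal, hypothetical Go sketch of that filter (the type and function names `container` and `filterPaused` are invented for illustration; this is not minikube's actual cri.go):

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterPaused mirrors the two skip rules visible in the log:
// "skipping <id> - not in ps" and `state = "stopped", want "paused"`.
func filterPaused(all []container, inPS map[string]bool) []string {
	var keep []string
	for _, c := range all {
		if !inPS[c.ID] {
			continue // skipping <id> - not in ps
		}
		if c.Status != "paused" {
			continue // skipping {...}: state = "stopped", want "paused"
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	// Truncated IDs, for illustration only.
	all := []container{
		{ID: "25cfba31", Status: "stopped"},
		{ID: "37a7e7bd", Status: "stopped"},
	}
	inPS := map[string]bool{"37a7e7bd": true}
	fmt.Println(filterPaused(all, inPS)) // [] — nothing paused, matching the log above
}

With every container stopped, the filter returns an empty set, which is why the run proceeds to a cluster restart rather than an unpause.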
	I1210 06:47:03.564051   45161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:47:03.579029   45161 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:47:03.579059   45161 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:47:03.579125   45161 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:47:03.594477   45161 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:47:03.595373   45161 kubeconfig.go:125] found "kubernetes-upgrade-921183" server: "https://192.168.50.121:8443"
	I1210 06:47:03.596409   45161 kapi.go:59] client config for kubernetes-upgrade-921183: &rest.Config{Host:"https://192.168.50.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kubernetes-upgrade-921183/client.key", CAFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:47:03.596983   45161 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:47:03.597003   45161 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:47:03.597010   45161 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:47:03.597015   45161 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:47:03.597021   45161 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:47:03.597545   45161 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:47:03.611776   45161 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.50.121
	I1210 06:47:03.611839   45161 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:47:03.611856   45161 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 06:47:03.611919   45161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:47:03.656654   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:47:03.656683   45161 cri.go:89] found id: "4ccedf845bbbfc6c91cab9794ee36bdd220b7e2bc5f3738c11b7a254ea9ec6e6"
	I1210 06:47:03.656689   45161 cri.go:89] found id: "37a7e7bdf5e48b1a47760bbd11927ae62224044d4a7ccc862d4ca227275d4b44"
	I1210 06:47:03.656694   45161 cri.go:89] found id: "afefcbb856b0f08a91f49a8c3a8439b89e5c993831421e016c4fdc2c3bc29763"
	I1210 06:47:03.656699   45161 cri.go:89] found id: "4217c802c1c1e6b3d68179813d468a5e4bf36725a69152327a31bec133b17292"
	I1210 06:47:03.656704   45161 cri.go:89] found id: "58b5d044572b357b41fb06eb948db19540c6bad2a93a332e074b1a6c91bdc3c8"
	I1210 06:47:03.656709   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:47:03.656713   45161 cri.go:89] found id: ""
	I1210 06:47:03.656719   45161 cri.go:252] Stopping containers: [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829 4ccedf845bbbfc6c91cab9794ee36bdd220b7e2bc5f3738c11b7a254ea9ec6e6 37a7e7bdf5e48b1a47760bbd11927ae62224044d4a7ccc862d4ca227275d4b44 afefcbb856b0f08a91f49a8c3a8439b89e5c993831421e016c4fdc2c3bc29763 4217c802c1c1e6b3d68179813d468a5e4bf36725a69152327a31bec133b17292 58b5d044572b357b41fb06eb948db19540c6bad2a93a332e074b1a6c91bdc3c8 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:47:03.656805   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:47:03.661857   45161 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829 4ccedf845bbbfc6c91cab9794ee36bdd220b7e2bc5f3738c11b7a254ea9ec6e6 37a7e7bdf5e48b1a47760bbd11927ae62224044d4a7ccc862d4ca227275d4b44 afefcbb856b0f08a91f49a8c3a8439b89e5c993831421e016c4fdc2c3bc29763 4217c802c1c1e6b3d68179813d468a5e4bf36725a69152327a31bec133b17292 58b5d044572b357b41fb06eb948db19540c6bad2a93a332e074b1a6c91bdc3c8 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6
	I1210 06:47:03.758195   45161 ssh_runner.go:195] Run: sudo systemctl stop kubelet
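Editor's note: the two commands above stop every kube-system container found by the label query and then stop the kubelet. A hypothetical sketch of how the single `crictl stop` invocation could be assembled from the discovered IDs (the helper name `stopCommand` is invented; minikube runs this remotely via ssh_runner):

package main

import (
	"fmt"
	"strings"
)

// stopCommand joins the container IDs into one `crictl stop` call,
// matching the command shape seen in the log above.
func stopCommand(ids []string, timeoutSeconds int) string {
	return fmt.Sprintf("sudo /usr/bin/crictl stop --timeout=%d %s",
		timeoutSeconds, strings.Join(ids, " "))
}

func main() {
	ids := []string{"8d1028df", "4ccedf84"} // truncated IDs, illustration only
	fmt.Println(stopCommand(ids, 10))
}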
	I1210 06:47:03.813445   45161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:47:03.833930   45161 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 06:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Dec 10 06:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5722 Dec 10 06:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Dec 10 06:45 /etc/kubernetes/scheduler.conf
	
	I1210 06:47:03.834001   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:47:03.846066   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:47:03.857401   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:47:03.857472   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:47:03.873302   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:47:03.885999   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:47:03.886070   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:47:03.898967   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:47:03.914011   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:47:03.914087   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
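Editor's note: the grep/rm pairs above implement a check-and-remove pattern: if a kubeconfig file does not mention the expected control-plane endpoint, it is deleted so the later `kubeadm init phase kubeconfig all` regenerates it. A minimal local dry-run sketch of the same pattern (the function name `checkEndpoint` is invented; the real flow executes grep and rm over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// checkEndpoint reports whether the file at path references the expected
// control-plane endpoint, mirroring the grep exit status in the log.
func checkEndpoint(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	return strings.Contains(string(data), endpoint), nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		ok, err := checkEndpoint(f)
		switch {
		case err != nil:
			fmt.Println("skip:", err)
		case !ok:
			fmt.Printf("%q may not be in %s - would remove\n", endpoint, f)
		default:
			fmt.Println(f, "already points at expected endpoint")
		}
	}
}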
	I1210 06:47:03.926688   45161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:47:03.939654   45161 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:47:04.004159   45161 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:47:04.592602   45161 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:47:04.892233   45161 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:47:04.957173   45161 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
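Editor's note: the five commands above re-run the kubeadm init phases, in order, against the regenerated /var/tmp/minikube/kubeadm.yaml. A small sketch that reproduces exactly that command sequence (illustration only, not minikube's ssh_runner code):

package main

import "fmt"

func main() {
	const (
		binDir = "/var/lib/minikube/binaries/v1.35.0-beta.0"
		cfg    = "/var/tmp/minikube/kubeadm.yaml"
	)
	// Phase order as it appears in the log: certs, kubeconfig,
	// kubelet-start, control-plane, etcd.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		fmt.Printf("sudo env PATH=%s:$PATH kubeadm init phase %s --config %s\n",
			binDir, p, cfg)
	}
}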
	I1210 06:47:05.019870   45161 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:47:05.019977   45161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:47:05.520098   45161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:47:06.020716   45161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:47:06.520927   45161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:47:07.020127   45161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:47:07.521006   45161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:47:08.020820   45161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:47:08.048807   45161 api_server.go:72] duration metric: took 3.028943693s to wait for apiserver process to appear ...
	I1210 06:47:08.048832   45161 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:47:08.048857   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:13.049671   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:47:13.049770   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:18.050944   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:47:18.050994   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:23.051524   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:47:23.051607   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:28.052404   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:47:28.052452   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:28.079423   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": read tcp 192.168.50.1:40044->192.168.50.121:8443: read: connection reset by peer
	I1210 06:47:28.549067   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:28.549840   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:29.049619   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:29.050409   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:29.549060   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:29.549787   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:30.049543   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:30.050268   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:30.549582   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:30.550219   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:31.048913   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:31.049733   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:31.548990   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:31.549755   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:32.049437   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:32.050088   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:32.549965   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:32.550747   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:33.049413   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:33.050069   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:33.549846   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:33.550512   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:34.049240   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:34.049963   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:34.549583   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:34.550327   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:35.048972   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:35.049675   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:35.549398   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:35.550069   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:36.049801   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:36.050440   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:36.549078   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:36.549844   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:37.049593   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:37.050235   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:37.549985   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:37.550643   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:38.049311   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:38.050015   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:38.549523   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:43.550290   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:47:43.550347   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:48.550622   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:47:48.550673   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:53.551467   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:47:53.551515   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:58.552912   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:47:58.552990   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:58.722271   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": read tcp 192.168.50.1:51560->192.168.50.121:8443: read: connection reset by peer
	I1210 06:47:59.049756   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:59.050468   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:47:59.549132   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:47:59.549876   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:00.049700   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:00.050448   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:00.549125   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:00.549925   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:01.049557   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:01.050326   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:01.549670   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:01.550382   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:02.049076   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:02.049895   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:02.549561   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:02.550256   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:03.049004   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:03.049805   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:03.549527   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:03.550245   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:04.048990   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:04.049643   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:04.549383   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:04.550151   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:05.049947   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:05.050599   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:05.549269   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:05.550042   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:06.049513   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:06.050178   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:06.549966   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:06.550678   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:07.049369   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:07.050137   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:07.550004   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:07.550822   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
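Editor's note: the long run of "Checking apiserver healthz ... stopped" lines above is a polling loop: each probe either times out after roughly five seconds or fails immediately with connection refused, and the loop retries on a short interval until an overall deadline. A hypothetical Go sketch of such a loop (the function name `waitForHealthz` and the interval/timeout values are assumptions, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the overall
// deadline expires, retrying through timeouts and refused connections.
func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, matching the ~5s gaps in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // connection refused / timeout; retry
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", overall)
}

func main() {
	if err := waitForHealthz("https://192.168.50.121:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run the probe never succeeds, so the code falls through to the log-gathering pass that follows.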
	I1210 06:48:08.049631   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:08.049763   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:08.094020   45161 cri.go:89] found id: "186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:08.094045   45161 cri.go:89] found id: ""
	I1210 06:48:08.094059   45161 logs.go:282] 1 containers: [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56]
	I1210 06:48:08.094124   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:08.099485   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:08.099563   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:08.142180   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:08.142212   45161 cri.go:89] found id: ""
	I1210 06:48:08.142222   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:08.142290   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:08.148314   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:08.148415   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:08.180918   45161 cri.go:89] found id: ""
	I1210 06:48:08.180947   45161 logs.go:282] 0 containers: []
	W1210 06:48:08.180959   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:08.180969   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:08.181038   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:08.223442   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:08.223474   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:08.223478   45161 cri.go:89] found id: ""
	I1210 06:48:08.223485   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:08.223537   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:08.228338   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:08.233076   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:08.233157   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:08.267803   45161 cri.go:89] found id: ""
	I1210 06:48:08.267826   45161 logs.go:282] 0 containers: []
	W1210 06:48:08.267834   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:08.267840   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:08.267894   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:08.306714   45161 cri.go:89] found id: "520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:08.306749   45161 cri.go:89] found id: ""
	I1210 06:48:08.306767   45161 logs.go:282] 1 containers: [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4]
	I1210 06:48:08.306835   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:08.311721   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:08.311797   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:08.343825   45161 cri.go:89] found id: ""
	I1210 06:48:08.343854   45161 logs.go:282] 0 containers: []
	W1210 06:48:08.343865   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:08.343872   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:08.343941   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:08.378813   45161 cri.go:89] found id: ""
	I1210 06:48:08.378881   45161 logs.go:282] 0 containers: []
	W1210 06:48:08.378893   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:08.378910   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:08.378923   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:08.424208   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:08.424239   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:08.442918   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:08.442950   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:48:08.532538   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:48:08.532566   45161 logs.go:123] Gathering logs for kube-apiserver [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56] ...
	I1210 06:48:08.532577   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:08.573730   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:08.573760   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:08.609734   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:08.609760   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:08.648276   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:08.648309   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:08.992134   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:08.992169   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:09.091185   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:09.091222   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:09.143190   45161 logs.go:123] Gathering logs for kube-controller-manager [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4] ...
	I1210 06:48:09.143221   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:11.679339   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:11.679962   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:11.680015   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:11.680061   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:11.714829   45161 cri.go:89] found id: "186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:11.714857   45161 cri.go:89] found id: ""
	I1210 06:48:11.714868   45161 logs.go:282] 1 containers: [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56]
	I1210 06:48:11.714941   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:11.719896   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:11.719968   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:11.757179   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:11.757210   45161 cri.go:89] found id: ""
	I1210 06:48:11.757220   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:11.757276   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:11.761547   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:11.761608   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:11.795110   45161 cri.go:89] found id: ""
	I1210 06:48:11.795144   45161 logs.go:282] 0 containers: []
	W1210 06:48:11.795155   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:11.795163   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:11.795223   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:11.828508   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:11.828538   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:11.828546   45161 cri.go:89] found id: ""
	I1210 06:48:11.828555   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:11.828634   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:11.833214   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:11.837541   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:11.837623   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:11.870699   45161 cri.go:89] found id: ""
	I1210 06:48:11.870741   45161 logs.go:282] 0 containers: []
	W1210 06:48:11.870762   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:11.870771   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:11.870862   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:11.906567   45161 cri.go:89] found id: "520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:11.906598   45161 cri.go:89] found id: ""
	I1210 06:48:11.906608   45161 logs.go:282] 1 containers: [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4]
	I1210 06:48:11.906679   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:11.911037   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:11.911140   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:11.943603   45161 cri.go:89] found id: ""
	I1210 06:48:11.943624   45161 logs.go:282] 0 containers: []
	W1210 06:48:11.943632   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:11.943639   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:11.943725   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:11.975957   45161 cri.go:89] found id: ""
	I1210 06:48:11.975981   45161 logs.go:282] 0 containers: []
	W1210 06:48:11.975989   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:11.976004   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:11.976016   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:11.991055   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:11.991080   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:48:12.059500   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:48:12.059520   45161 logs.go:123] Gathering logs for kube-apiserver [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56] ...
	I1210 06:48:12.059534   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:12.095226   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:12.095258   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:12.130768   45161 logs.go:123] Gathering logs for kube-controller-manager [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4] ...
	I1210 06:48:12.130812   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:12.165777   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:12.165802   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:12.416527   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:12.416565   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:12.460018   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:12.460051   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:12.551574   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:12.551617   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:12.598994   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:12.599031   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:15.133006   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:15.133778   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:15.133841   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:15.133897   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:15.170045   45161 cri.go:89] found id: "186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:15.170082   45161 cri.go:89] found id: ""
	I1210 06:48:15.170094   45161 logs.go:282] 1 containers: [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56]
	I1210 06:48:15.170170   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:15.174734   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:15.174830   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:15.208445   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:15.208470   45161 cri.go:89] found id: ""
	I1210 06:48:15.208480   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:15.208541   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:15.213307   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:15.213393   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:15.246337   45161 cri.go:89] found id: ""
	I1210 06:48:15.246383   45161 logs.go:282] 0 containers: []
	W1210 06:48:15.246394   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:15.246400   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:15.246451   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:15.280203   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:15.280232   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:15.280239   45161 cri.go:89] found id: ""
	I1210 06:48:15.280247   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:15.280302   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:15.284991   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:15.289166   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:15.289230   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:15.324866   45161 cri.go:89] found id: ""
	I1210 06:48:15.324892   45161 logs.go:282] 0 containers: []
	W1210 06:48:15.324901   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:15.324907   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:15.324962   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:15.359504   45161 cri.go:89] found id: "520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:15.359527   45161 cri.go:89] found id: ""
	I1210 06:48:15.359533   45161 logs.go:282] 1 containers: [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4]
	I1210 06:48:15.359594   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:15.364016   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:15.364087   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:15.394312   45161 cri.go:89] found id: ""
	I1210 06:48:15.394347   45161 logs.go:282] 0 containers: []
	W1210 06:48:15.394377   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:15.394385   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:15.394440   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:15.429405   45161 cri.go:89] found id: ""
	I1210 06:48:15.429431   45161 logs.go:282] 0 containers: []
	W1210 06:48:15.429439   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:15.429453   45161 logs.go:123] Gathering logs for kube-apiserver [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56] ...
	I1210 06:48:15.429464   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:15.466868   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:15.466896   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:15.502685   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:15.502715   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:15.537541   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:15.537574   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:15.848797   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:15.848828   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:15.889483   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:15.889519   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:15.905573   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:15.905601   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:15.955182   45161 logs.go:123] Gathering logs for kube-controller-manager [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4] ...
	I1210 06:48:15.955220   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:15.991211   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:15.991258   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:16.089723   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:16.089760   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:48:16.175111   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:48:18.675208   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:18.675971   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:18.676033   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:18.676092   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:18.719244   45161 cri.go:89] found id: "186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:18.719274   45161 cri.go:89] found id: ""
	I1210 06:48:18.719284   45161 logs.go:282] 1 containers: [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56]
	I1210 06:48:18.719383   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:18.723833   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:18.723898   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:18.758508   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:18.758532   45161 cri.go:89] found id: ""
	I1210 06:48:18.758540   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:18.758592   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:18.762870   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:18.762953   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:18.801183   45161 cri.go:89] found id: ""
	I1210 06:48:18.801210   45161 logs.go:282] 0 containers: []
	W1210 06:48:18.801219   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:18.801224   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:18.801283   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:18.836381   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:18.836408   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:18.836412   45161 cri.go:89] found id: ""
	I1210 06:48:18.836420   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:18.836482   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:18.840984   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:18.845345   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:18.845426   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:18.878983   45161 cri.go:89] found id: ""
	I1210 06:48:18.879012   45161 logs.go:282] 0 containers: []
	W1210 06:48:18.879020   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:18.879028   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:18.879090   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:18.912909   45161 cri.go:89] found id: "520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:18.912937   45161 cri.go:89] found id: ""
	I1210 06:48:18.912947   45161 logs.go:282] 1 containers: [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4]
	I1210 06:48:18.913004   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:18.917539   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:18.917616   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:18.949705   45161 cri.go:89] found id: ""
	I1210 06:48:18.949744   45161 logs.go:282] 0 containers: []
	W1210 06:48:18.949759   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:18.949768   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:18.949840   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:18.989342   45161 cri.go:89] found id: ""
	I1210 06:48:18.989388   45161 logs.go:282] 0 containers: []
	W1210 06:48:18.989399   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:18.989415   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:18.989429   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:48:19.058923   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:48:19.058949   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:19.058964   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:19.105438   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:19.105477   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:19.143494   45161 logs.go:123] Gathering logs for kube-controller-manager [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4] ...
	I1210 06:48:19.143537   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:19.180705   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:19.180733   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:19.433259   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:19.433311   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:19.473331   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:19.473375   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:19.565133   45161 logs.go:123] Gathering logs for kube-apiserver [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56] ...
	I1210 06:48:19.565168   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:19.608337   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:19.608383   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:19.646980   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:19.647012   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:22.164419   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:22.165077   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:22.165124   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:22.165173   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:22.217433   45161 cri.go:89] found id: "186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:22.217455   45161 cri.go:89] found id: ""
	I1210 06:48:22.217461   45161 logs.go:282] 1 containers: [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56]
	I1210 06:48:22.217513   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:22.222139   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:22.222230   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:22.274455   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:22.274483   45161 cri.go:89] found id: ""
	I1210 06:48:22.274566   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:22.274647   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:22.282561   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:22.282649   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:22.320753   45161 cri.go:89] found id: ""
	I1210 06:48:22.320782   45161 logs.go:282] 0 containers: []
	W1210 06:48:22.320800   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:22.320807   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:22.320871   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:22.359967   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:22.359997   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:22.360005   45161 cri.go:89] found id: ""
	I1210 06:48:22.360015   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:22.360077   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:22.364816   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:22.369164   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:22.369220   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:22.405967   45161 cri.go:89] found id: ""
	I1210 06:48:22.405996   45161 logs.go:282] 0 containers: []
	W1210 06:48:22.406004   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:22.406009   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:22.406070   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:22.445582   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:22.445606   45161 cri.go:89] found id: "520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:22.445610   45161 cri.go:89] found id: ""
	I1210 06:48:22.445616   45161 logs.go:282] 2 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4]
	I1210 06:48:22.445668   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:22.450624   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:22.454893   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:22.454952   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:22.490336   45161 cri.go:89] found id: ""
	I1210 06:48:22.490371   45161 logs.go:282] 0 containers: []
	W1210 06:48:22.490380   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:22.490385   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:22.490444   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:22.533449   45161 cri.go:89] found id: ""
	I1210 06:48:22.533474   45161 logs.go:282] 0 containers: []
	W1210 06:48:22.533481   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:22.533489   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:22.533503   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:22.646621   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:22.646657   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:22.663377   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:22.663413   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:22.703670   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:48:22.703702   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:22.740065   45161 logs.go:123] Gathering logs for kube-controller-manager [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4] ...
	I1210 06:48:22.740096   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:22.776619   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:22.776644   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:23.008316   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:23.008371   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:23.047856   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:23.047889   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:48:23.118639   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:48:23.118662   45161 logs.go:123] Gathering logs for kube-apiserver [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56] ...
	I1210 06:48:23.118675   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:23.161508   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:23.161540   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:23.208676   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:23.208714   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:25.744904   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:30.747752   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:48:30.747846   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:30.747918   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:30.786129   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:30.786157   45161 cri.go:89] found id: "186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:30.786163   45161 cri.go:89] found id: ""
	I1210 06:48:30.786172   45161 logs.go:282] 2 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2 186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56]
	I1210 06:48:30.786244   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:30.792003   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:30.797150   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:30.797227   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:30.838387   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:30.838413   45161 cri.go:89] found id: ""
	I1210 06:48:30.838422   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:30.838519   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:30.843116   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:30.843203   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:30.877836   45161 cri.go:89] found id: ""
	I1210 06:48:30.877870   45161 logs.go:282] 0 containers: []
	W1210 06:48:30.877882   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:30.877890   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:30.877958   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:30.914635   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:30.914664   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:30.914671   45161 cri.go:89] found id: ""
	I1210 06:48:30.914680   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:30.914760   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:30.919403   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:30.923971   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:30.924069   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:30.961264   45161 cri.go:89] found id: ""
	I1210 06:48:30.961298   45161 logs.go:282] 0 containers: []
	W1210 06:48:30.961312   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:30.961326   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:30.961408   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:30.994655   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:30.994685   45161 cri.go:89] found id: "520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:30.994692   45161 cri.go:89] found id: ""
	I1210 06:48:30.994702   45161 logs.go:282] 2 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4]
	I1210 06:48:30.994788   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:31.000675   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:31.006232   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:31.006306   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:31.044427   45161 cri.go:89] found id: ""
	I1210 06:48:31.044460   45161 logs.go:282] 0 containers: []
	W1210 06:48:31.044471   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:31.044478   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:31.044551   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:31.081247   45161 cri.go:89] found id: ""
	I1210 06:48:31.081273   45161 logs.go:282] 0 containers: []
	W1210 06:48:31.081283   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:31.081295   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:48:31.081313   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:31.131103   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:31.131141   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:31.203160   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:31.203204   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:31.246105   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:31.246139   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:31.290600   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:48:31.290632   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:31.329070   45161 logs.go:123] Gathering logs for kube-controller-manager [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4] ...
	I1210 06:48:31.329103   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:31.368329   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:31.368384   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:31.385740   45161 logs.go:123] Gathering logs for kube-apiserver [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56] ...
	I1210 06:48:31.385772   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:31.433247   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:31.433280   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:31.721398   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:31.721433   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:31.766783   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:31.766813   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:31.884525   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:31.884572   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 06:48:41.968255   45161 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.083658374s)
	W1210 06:48:41.968297   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1210 06:48:44.468479   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:44.577911   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": read tcp 192.168.50.1:42934->192.168.50.121:8443: read: connection reset by peer
	I1210 06:48:44.578005   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:44.578070   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:44.631422   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:44.631449   45161 cri.go:89] found id: "186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:44.631454   45161 cri.go:89] found id: ""
	I1210 06:48:44.631464   45161 logs.go:282] 2 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2 186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56]
	I1210 06:48:44.631529   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:44.636846   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:44.641200   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:44.641272   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:44.679545   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:44.679572   45161 cri.go:89] found id: ""
	I1210 06:48:44.679582   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:44.679633   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:44.685183   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:44.685248   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:44.722472   45161 cri.go:89] found id: ""
	I1210 06:48:44.722502   45161 logs.go:282] 0 containers: []
	W1210 06:48:44.722510   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:44.722516   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:44.722591   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:44.762313   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:44.762339   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:44.762345   45161 cri.go:89] found id: ""
	I1210 06:48:44.762367   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:44.762434   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:44.767409   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:44.771873   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:44.771946   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:44.812081   45161 cri.go:89] found id: ""
	I1210 06:48:44.812111   45161 logs.go:282] 0 containers: []
	W1210 06:48:44.812124   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:44.812131   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:44.812195   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:44.847552   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:44.847587   45161 cri.go:89] found id: "520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	I1210 06:48:44.847594   45161 cri.go:89] found id: ""
	I1210 06:48:44.847606   45161 logs.go:282] 2 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4]
	I1210 06:48:44.847687   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:44.853329   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:44.857751   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:44.857822   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:44.891835   45161 cri.go:89] found id: ""
	I1210 06:48:44.891866   45161 logs.go:282] 0 containers: []
	W1210 06:48:44.891879   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:44.891887   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:44.891955   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:44.932117   45161 cri.go:89] found id: ""
	I1210 06:48:44.932149   45161 logs.go:282] 0 containers: []
	W1210 06:48:44.932160   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:44.932172   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:44.932187   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:48:45.007179   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:48:45.007211   45161 logs.go:123] Gathering logs for kube-apiserver [186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56] ...
	I1210 06:48:45.007227   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186ae98c7258e10ac897dc5c888e35105b667654f4c643a6c892252720653f56"
	I1210 06:48:45.050671   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:45.050700   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:45.115172   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:45.115209   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:45.170818   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:45.170853   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:45.552508   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:45.552542   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:45.604661   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:45.604706   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:45.745299   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:45.745365   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:45.761280   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:48:45.761313   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:45.815521   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:45.815561   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:45.863954   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:48:45.863985   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:45.915912   45161 logs.go:123] Gathering logs for kube-controller-manager [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4] ...
	I1210 06:48:45.915959   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	W1210 06:48:45.950387   45161 logs.go:130] failed kube-controller-manager [520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:48:45.944396    4297 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4\": container with ID starting with 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4 not found: ID does not exist" containerID="520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	time="2025-12-10T06:48:45Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4\": container with ID starting with 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1210 06:48:45.944396    4297 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4\": container with ID starting with 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4 not found: ID does not exist" containerID="520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4"
	time="2025-12-10T06:48:45Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4\": container with ID starting with 520747c3eee45c4c2312243ed28a445430967fb99b5550c37b56a5a4fb73bcc4 not found: ID does not exist"
	
	** /stderr **
	I1210 06:48:48.450612   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:48.451290   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:48.451384   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:48.451452   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:48.492098   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:48.492128   45161 cri.go:89] found id: ""
	I1210 06:48:48.492137   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:48:48.492206   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:48.497302   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:48.497411   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:48.535750   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:48.535777   45161 cri.go:89] found id: ""
	I1210 06:48:48.535786   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:48.535859   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:48.541949   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:48.542030   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:48.578253   45161 cri.go:89] found id: ""
	I1210 06:48:48.578283   45161 logs.go:282] 0 containers: []
	W1210 06:48:48.578293   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:48.578300   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:48.578382   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:48.617455   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:48.617489   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:48.617496   45161 cri.go:89] found id: ""
	I1210 06:48:48.617505   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:48.617578   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:48.623779   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:48.628553   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:48.628646   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:48.669918   45161 cri.go:89] found id: ""
	I1210 06:48:48.669946   45161 logs.go:282] 0 containers: []
	W1210 06:48:48.669957   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:48.669964   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:48.670034   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:48.711063   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:48.711093   45161 cri.go:89] found id: ""
	I1210 06:48:48.711104   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:48:48.711172   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:48.716986   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:48.717064   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:48.759693   45161 cri.go:89] found id: ""
	I1210 06:48:48.759720   45161 logs.go:282] 0 containers: []
	W1210 06:48:48.759727   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:48.759732   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:48.759789   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:48.802060   45161 cri.go:89] found id: ""
	I1210 06:48:48.802092   45161 logs.go:282] 0 containers: []
	W1210 06:48:48.802100   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:48.802117   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:48.802131   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:48.839340   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:48:48.839387   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:48.874609   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:48.874639   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:48.915385   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:48.915415   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:48:48.996989   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:48:48.997008   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:48:48.997023   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:49.041040   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:49.041081   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:49.096118   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:49.096155   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:49.136313   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:49.136367   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:49.503700   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:49.503752   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:49.617483   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:49.617524   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:52.135846   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:52.136726   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:52.136792   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:52.136859   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:52.183089   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:52.183118   45161 cri.go:89] found id: ""
	I1210 06:48:52.183126   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:48:52.183195   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:52.188407   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:52.188478   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:52.234754   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:52.234789   45161 cri.go:89] found id: ""
	I1210 06:48:52.234798   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:52.234864   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:52.239850   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:52.239932   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:52.287470   45161 cri.go:89] found id: ""
	I1210 06:48:52.287499   45161 logs.go:282] 0 containers: []
	W1210 06:48:52.287514   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:52.287522   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:52.287578   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:52.330050   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:52.330093   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:52.330100   45161 cri.go:89] found id: ""
	I1210 06:48:52.330109   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:52.330176   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:52.335291   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:52.339793   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:52.339912   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:52.379962   45161 cri.go:89] found id: ""
	I1210 06:48:52.379987   45161 logs.go:282] 0 containers: []
	W1210 06:48:52.379995   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:52.380000   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:52.380063   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:52.421180   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:52.421203   45161 cri.go:89] found id: ""
	I1210 06:48:52.421212   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:48:52.421272   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:52.425927   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:52.426004   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:52.468650   45161 cri.go:89] found id: ""
	I1210 06:48:52.468675   45161 logs.go:282] 0 containers: []
	W1210 06:48:52.468686   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:52.468694   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:52.468779   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:52.507512   45161 cri.go:89] found id: ""
	I1210 06:48:52.507539   45161 logs.go:282] 0 containers: []
	W1210 06:48:52.507550   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:52.507565   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:48:52.507579   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:52.552834   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:52.552873   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:52.865193   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:52.865227   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:52.967111   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:52.967147   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:52.985369   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:52.985398   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:48:53.066644   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:48:53.066677   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:53.066695   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:53.127251   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:53.127291   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:53.180280   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:48:53.180315   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:53.229058   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:53.229091   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:53.271901   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:53.271931   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:55.808506   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:55.809201   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:55.809273   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:55.809340   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:55.852591   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:55.852620   45161 cri.go:89] found id: ""
	I1210 06:48:55.852628   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:48:55.852699   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:55.857638   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:55.857729   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:55.898748   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:55.898784   45161 cri.go:89] found id: ""
	I1210 06:48:55.898795   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:55.898871   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:55.904177   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:55.904272   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:55.946743   45161 cri.go:89] found id: ""
	I1210 06:48:55.946775   45161 logs.go:282] 0 containers: []
	W1210 06:48:55.946794   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:55.946804   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:55.946876   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:55.983830   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:55.983859   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:55.983867   45161 cri.go:89] found id: ""
	I1210 06:48:55.983878   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:55.983939   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:55.988930   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:55.993795   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:55.993862   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:56.043964   45161 cri.go:89] found id: ""
	I1210 06:48:56.043996   45161 logs.go:282] 0 containers: []
	W1210 06:48:56.044008   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:56.044016   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:56.044082   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:56.084188   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:56.084222   45161 cri.go:89] found id: ""
	I1210 06:48:56.084233   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:48:56.084303   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:56.088826   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:56.088915   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:56.127122   45161 cri.go:89] found id: ""
	I1210 06:48:56.127168   45161 logs.go:282] 0 containers: []
	W1210 06:48:56.127184   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:56.127193   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:56.127276   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:56.165159   45161 cri.go:89] found id: ""
	I1210 06:48:56.165194   45161 logs.go:282] 0 containers: []
	W1210 06:48:56.165204   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:56.165222   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:48:56.165245   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:48:56.205170   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:48:56.205204   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:48:56.279237   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:48:56.279270   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:48:56.279288   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:56.319408   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:56.319445   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:56.365686   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:48:56.365761   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:48:56.470476   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:48:56.470513   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:48:56.489596   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:48:56.489625   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:56.524200   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:48:56.524228   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:56.558613   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:48:56.558643   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:56.592048   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:56.592077   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:48:59.350008   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:48:59.350826   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:48:59.350891   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:48:59.350940   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:48:59.387163   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:59.387194   45161 cri.go:89] found id: ""
	I1210 06:48:59.387204   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:48:59.387300   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:59.392564   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:48:59.392643   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:48:59.435217   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:59.435256   45161 cri.go:89] found id: ""
	I1210 06:48:59.435266   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:48:59.435336   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:59.440149   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:48:59.440217   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:48:59.473602   45161 cri.go:89] found id: ""
	I1210 06:48:59.473626   45161 logs.go:282] 0 containers: []
	W1210 06:48:59.473635   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:48:59.473640   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:48:59.473691   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:48:59.513169   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:48:59.513204   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:48:59.513211   45161 cri.go:89] found id: ""
	I1210 06:48:59.513221   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:48:59.513289   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:59.518658   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:59.523477   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:48:59.523566   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:48:59.560944   45161 cri.go:89] found id: ""
	I1210 06:48:59.560971   45161 logs.go:282] 0 containers: []
	W1210 06:48:59.560982   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:48:59.560988   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:48:59.561055   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:48:59.602712   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:48:59.602737   45161 cri.go:89] found id: ""
	I1210 06:48:59.602748   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:48:59.602815   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:48:59.607951   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:48:59.608026   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:48:59.643373   45161 cri.go:89] found id: ""
	I1210 06:48:59.643407   45161 logs.go:282] 0 containers: []
	W1210 06:48:59.643417   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:48:59.643423   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:48:59.643489   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:48:59.684968   45161 cri.go:89] found id: ""
	I1210 06:48:59.684998   45161 logs.go:282] 0 containers: []
	W1210 06:48:59.685010   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:48:59.685029   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:48:59.685045   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:48:59.732351   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:48:59.732397   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:48:59.793764   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:48:59.793801   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:00.056043   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:00.056084   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:00.103122   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:00.103166   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:00.141809   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:00.141847   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:00.183831   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:00.183875   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:00.222538   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:00.222569   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:00.308545   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:00.308586   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:00.324534   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:00.324564   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:00.394133   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:02.894326   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:02.895104   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:02.895154   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:02.895215   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:02.938747   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:02.938769   45161 cri.go:89] found id: ""
	I1210 06:49:02.938776   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:02.938839   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:02.945254   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:02.945346   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:02.988713   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:02.988745   45161 cri.go:89] found id: ""
	I1210 06:49:02.988755   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:02.988812   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:02.994597   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:02.994686   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:03.036043   45161 cri.go:89] found id: ""
	I1210 06:49:03.036071   45161 logs.go:282] 0 containers: []
	W1210 06:49:03.036079   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:03.036084   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:03.036152   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:03.080730   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:03.080769   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:03.080777   45161 cri.go:89] found id: ""
	I1210 06:49:03.080786   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:03.080854   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:03.085694   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:03.090040   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:03.090118   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:03.129860   45161 cri.go:89] found id: ""
	I1210 06:49:03.129888   45161 logs.go:282] 0 containers: []
	W1210 06:49:03.129900   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:03.129909   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:03.129977   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:03.172567   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:03.172609   45161 cri.go:89] found id: ""
	I1210 06:49:03.172620   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:03.172693   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:03.178033   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:03.178104   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:03.217890   45161 cri.go:89] found id: ""
	I1210 06:49:03.217915   45161 logs.go:282] 0 containers: []
	W1210 06:49:03.217925   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:03.217941   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:03.217996   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:03.256035   45161 cri.go:89] found id: ""
	I1210 06:49:03.256060   45161 logs.go:282] 0 containers: []
	W1210 06:49:03.256068   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:03.256082   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:03.256092   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:03.298369   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:03.298408   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:03.355635   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:03.355668   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:03.403698   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:03.403729   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:03.441214   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:03.441247   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:03.479758   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:03.479799   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:03.782836   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:03.782874   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:03.826408   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:03.826443   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:03.965765   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:03.965807   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:03.984477   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:03.984510   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:04.063580   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:06.564447   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:06.565194   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:06.565271   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:06.565349   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:06.618288   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:06.618317   45161 cri.go:89] found id: ""
	I1210 06:49:06.618326   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:06.618412   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:06.625141   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:06.625224   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:06.672629   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:06.672659   45161 cri.go:89] found id: ""
	I1210 06:49:06.672670   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:06.672740   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:06.680320   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:06.680422   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:06.725388   45161 cri.go:89] found id: ""
	I1210 06:49:06.725418   45161 logs.go:282] 0 containers: []
	W1210 06:49:06.725430   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:06.725438   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:06.725505   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:06.772388   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:06.772419   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:06.772426   45161 cri.go:89] found id: ""
	I1210 06:49:06.772440   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:06.772526   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:06.778796   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:06.784765   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:06.784857   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:06.834979   45161 cri.go:89] found id: ""
	I1210 06:49:06.835014   45161 logs.go:282] 0 containers: []
	W1210 06:49:06.835030   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:06.835039   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:06.835111   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:06.887562   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:06.887600   45161 cri.go:89] found id: ""
	I1210 06:49:06.887612   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:06.887683   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:06.894517   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:06.894600   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:06.939839   45161 cri.go:89] found id: ""
	I1210 06:49:06.939873   45161 logs.go:282] 0 containers: []
	W1210 06:49:06.939884   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:06.939891   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:06.939957   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:06.989514   45161 cri.go:89] found id: ""
	I1210 06:49:06.989548   45161 logs.go:282] 0 containers: []
	W1210 06:49:06.989561   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:06.989582   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:06.989641   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:07.035850   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:07.035894   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:07.079226   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:07.079268   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:07.443147   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:07.443196   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:07.493418   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:07.493462   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:07.603023   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:07.603064   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:07.694775   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:07.694801   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:07.694818   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:07.743695   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:07.743748   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:07.798560   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:07.798613   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:07.821462   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:07.821498   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:10.403503   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:10.404170   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:10.404235   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:10.404297   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:10.446112   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:10.446145   45161 cri.go:89] found id: ""
	I1210 06:49:10.446156   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:10.446230   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:10.450843   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:10.450936   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:10.493378   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:10.493407   45161 cri.go:89] found id: ""
	I1210 06:49:10.493417   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:10.493477   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:10.499459   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:10.499540   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:10.542116   45161 cri.go:89] found id: ""
	I1210 06:49:10.542143   45161 logs.go:282] 0 containers: []
	W1210 06:49:10.542151   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:10.542161   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:10.542231   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:10.577202   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:10.577229   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:10.577236   45161 cri.go:89] found id: ""
	I1210 06:49:10.577245   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:10.577314   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:10.582234   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:10.587333   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:10.587417   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:10.627643   45161 cri.go:89] found id: ""
	I1210 06:49:10.627681   45161 logs.go:282] 0 containers: []
	W1210 06:49:10.627692   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:10.627699   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:10.627798   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:10.674893   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:10.674920   45161 cri.go:89] found id: ""
	I1210 06:49:10.674929   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:10.674994   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:10.680824   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:10.680910   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:10.726773   45161 cri.go:89] found id: ""
	I1210 06:49:10.726809   45161 logs.go:282] 0 containers: []
	W1210 06:49:10.726822   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:10.726831   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:10.726899   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:10.764186   45161 cri.go:89] found id: ""
	I1210 06:49:10.764218   45161 logs.go:282] 0 containers: []
	W1210 06:49:10.764229   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:10.764249   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:10.764263   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:10.783886   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:10.783919   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:10.826250   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:10.826285   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:10.893849   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:10.893898   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:10.942883   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:10.942914   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:11.040060   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:11.040100   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:11.117842   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:11.117874   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:11.117890   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:11.156512   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:11.156551   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:11.194584   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:11.194624   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:11.230379   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:11.230410   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:13.985242   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:13.986001   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:13.986076   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:13.986150   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:14.028605   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:14.028636   45161 cri.go:89] found id: ""
	I1210 06:49:14.028652   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:14.028715   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:14.035298   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:14.035418   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:14.082183   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:14.082215   45161 cri.go:89] found id: ""
	I1210 06:49:14.082225   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:14.082290   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:14.088420   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:14.088524   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:14.129817   45161 cri.go:89] found id: ""
	I1210 06:49:14.129859   45161 logs.go:282] 0 containers: []
	W1210 06:49:14.129871   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:14.129878   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:14.129952   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:14.172294   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:14.172322   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:14.172330   45161 cri.go:89] found id: ""
	I1210 06:49:14.172338   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:14.172422   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:14.178226   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:14.185170   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:14.185251   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:14.232584   45161 cri.go:89] found id: ""
	I1210 06:49:14.232622   45161 logs.go:282] 0 containers: []
	W1210 06:49:14.232643   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:14.232654   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:14.232730   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:14.273215   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:14.273251   45161 cri.go:89] found id: ""
	I1210 06:49:14.273263   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:14.273325   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:14.279943   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:14.280018   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:14.322328   45161 cri.go:89] found id: ""
	I1210 06:49:14.322383   45161 logs.go:282] 0 containers: []
	W1210 06:49:14.322395   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:14.322404   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:14.322474   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:14.369160   45161 cri.go:89] found id: ""
	I1210 06:49:14.369190   45161 logs.go:282] 0 containers: []
	W1210 06:49:14.369201   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:14.369216   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:14.369230   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:14.424423   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:14.424470   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:14.472186   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:14.472230   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:14.511196   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:14.511227   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:14.552264   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:14.552298   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:14.994529   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:14.994574   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:15.094131   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:15.094175   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:15.111136   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:15.111172   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:15.213916   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:15.213951   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:15.213966   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:15.269346   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:15.269392   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:17.826068   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:17.826874   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:17.826943   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:17.827000   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:17.882500   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:17.882526   45161 cri.go:89] found id: ""
	I1210 06:49:17.882538   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:17.882600   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:17.888877   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:17.888960   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:17.938940   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:17.938965   45161 cri.go:89] found id: ""
	I1210 06:49:17.939065   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:17.939264   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:17.945637   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:17.945751   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:17.991323   45161 cri.go:89] found id: ""
	I1210 06:49:17.991350   45161 logs.go:282] 0 containers: []
	W1210 06:49:17.991382   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:17.991390   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:17.991454   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:18.043019   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:18.043057   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:18.043062   45161 cri.go:89] found id: ""
	I1210 06:49:18.043071   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:18.043132   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:18.049079   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:18.054720   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:18.054802   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:18.104579   45161 cri.go:89] found id: ""
	I1210 06:49:18.104611   45161 logs.go:282] 0 containers: []
	W1210 06:49:18.104623   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:18.104631   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:18.104698   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:18.151710   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:18.151737   45161 cri.go:89] found id: ""
	I1210 06:49:18.151745   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:18.151806   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:18.159387   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:18.159473   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:18.207955   45161 cri.go:89] found id: ""
	I1210 06:49:18.207982   45161 logs.go:282] 0 containers: []
	W1210 06:49:18.207995   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:18.208026   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:18.208229   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:18.255887   45161 cri.go:89] found id: ""
	I1210 06:49:18.255917   45161 logs.go:282] 0 containers: []
	W1210 06:49:18.255928   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:18.255946   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:18.255960   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:18.277780   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:18.277815   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:18.370942   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:18.370984   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:18.371000   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:18.425875   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:18.425918   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:18.474118   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:18.474157   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:18.523339   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:18.523406   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:18.567476   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:18.567519   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:18.969973   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:18.970032   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:19.116820   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:19.116871   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:19.176101   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:19.176138   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:21.720443   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:21.721152   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:21.721223   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:21.721287   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:21.766512   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:21.766554   45161 cri.go:89] found id: ""
	I1210 06:49:21.766564   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:21.766626   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:21.772872   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:21.772970   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:21.811317   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:21.811341   45161 cri.go:89] found id: ""
	I1210 06:49:21.811349   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:21.811450   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:21.817886   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:21.817982   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:21.860578   45161 cri.go:89] found id: ""
	I1210 06:49:21.860610   45161 logs.go:282] 0 containers: []
	W1210 06:49:21.860623   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:21.860630   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:21.860697   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:21.900084   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:21.900114   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:21.900122   45161 cri.go:89] found id: ""
	I1210 06:49:21.900133   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:21.900216   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:21.905169   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:21.910033   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:21.910101   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:21.950644   45161 cri.go:89] found id: ""
	I1210 06:49:21.950674   45161 logs.go:282] 0 containers: []
	W1210 06:49:21.950686   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:21.950693   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:21.950774   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:21.991839   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:21.991868   45161 cri.go:89] found id: ""
	I1210 06:49:21.991878   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:21.991944   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:21.998581   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:21.998672   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:22.035559   45161 cri.go:89] found id: ""
	I1210 06:49:22.035593   45161 logs.go:282] 0 containers: []
	W1210 06:49:22.035605   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:22.035614   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:22.035679   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:22.075783   45161 cri.go:89] found id: ""
	I1210 06:49:22.075812   45161 logs.go:282] 0 containers: []
	W1210 06:49:22.075823   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:22.075840   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:22.075860   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:22.131097   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:22.131142   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:22.172104   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:22.172131   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:22.214420   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:22.214465   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:22.266645   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:22.266685   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:22.366155   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:22.366182   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:22.366197   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:22.408143   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:22.408192   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:22.455308   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:22.455350   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:22.740850   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:22.740891   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:22.839535   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:22.839583   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:25.360889   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:25.361631   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:25.361692   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:25.361751   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:25.401273   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:25.401305   45161 cri.go:89] found id: ""
	I1210 06:49:25.401314   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:25.401413   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:25.406716   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:25.406809   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:25.442789   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:25.442820   45161 cri.go:89] found id: ""
	I1210 06:49:25.442829   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:25.442891   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:25.448615   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:25.448704   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:25.494013   45161 cri.go:89] found id: ""
	I1210 06:49:25.494034   45161 logs.go:282] 0 containers: []
	W1210 06:49:25.494041   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:25.494047   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:25.494093   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:25.534255   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:25.534275   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:25.534282   45161 cri.go:89] found id: ""
	I1210 06:49:25.534294   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:25.534350   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:25.539835   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:25.545940   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:25.545995   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:25.586805   45161 cri.go:89] found id: ""
	I1210 06:49:25.586836   45161 logs.go:282] 0 containers: []
	W1210 06:49:25.586845   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:25.586853   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:25.586911   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:25.630127   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:25.630156   45161 cri.go:89] found id: ""
	I1210 06:49:25.630166   45161 logs.go:282] 1 containers: [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:25.630234   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:25.636433   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:25.636499   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:25.679719   45161 cri.go:89] found id: ""
	I1210 06:49:25.679754   45161 logs.go:282] 0 containers: []
	W1210 06:49:25.679764   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:25.679778   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:25.679840   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:25.721158   45161 cri.go:89] found id: ""
	I1210 06:49:25.721191   45161 logs.go:282] 0 containers: []
	W1210 06:49:25.721203   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:25.721219   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:25.721231   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:25.762006   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:25.762050   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:25.815932   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:25.815975   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:25.933278   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:25.933313   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:25.986071   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:25.986111   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:26.020905   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:26.020940   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:26.057947   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:26.057990   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:26.328211   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:26.328249   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:26.344523   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:26.344558   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:26.420577   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:26.420604   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:26.420617   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:28.978259   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:28.979078   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:28.979142   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:28.979208   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:29.032457   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:29.032545   45161 cri.go:89] found id: ""
	I1210 06:49:29.032562   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:29.032631   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:29.039212   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:29.039300   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:29.082098   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:29.082127   45161 cri.go:89] found id: ""
	I1210 06:49:29.082139   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:29.082209   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:29.087640   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:29.087745   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:29.124022   45161 cri.go:89] found id: ""
	I1210 06:49:29.124051   45161 logs.go:282] 0 containers: []
	W1210 06:49:29.124062   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:29.124070   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:29.124140   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:29.167204   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:29.167239   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:29.167246   45161 cri.go:89] found id: ""
	I1210 06:49:29.167256   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:29.167328   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:29.172381   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:29.177829   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:29.177913   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:29.241605   45161 cri.go:89] found id: ""
	I1210 06:49:29.241635   45161 logs.go:282] 0 containers: []
	W1210 06:49:29.241647   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:29.241655   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:29.241735   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:29.296089   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:29.296124   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:29.296132   45161 cri.go:89] found id: ""
	I1210 06:49:29.296142   45161 logs.go:282] 2 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:29.296220   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:29.304790   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:29.312395   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:29.312488   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:29.361286   45161 cri.go:89] found id: ""
	I1210 06:49:29.361319   45161 logs.go:282] 0 containers: []
	W1210 06:49:29.361331   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:29.361338   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:29.361422   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:29.409523   45161 cri.go:89] found id: ""
	I1210 06:49:29.409553   45161 logs.go:282] 0 containers: []
	W1210 06:49:29.409564   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:29.409576   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:29.409590   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:29.557259   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:29.557323   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:29.582346   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:29.582408   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:29.667200   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:29.667254   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:29.725303   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:29.725346   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:29.773602   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:49:29.773662   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:29.825235   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:29.825284   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:29.881189   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:29.881241   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:30.188849   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:30.188896   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:30.272644   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:30.272672   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:30.272689   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:30.317143   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:30.317187   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:32.866200   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:32.866940   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:32.867006   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:32.867058   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:32.908260   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:32.908287   45161 cri.go:89] found id: ""
	I1210 06:49:32.908297   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:32.908368   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:32.913806   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:32.913883   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:32.951820   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:32.951851   45161 cri.go:89] found id: ""
	I1210 06:49:32.951862   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:32.951941   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:32.958062   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:32.958145   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:32.994068   45161 cri.go:89] found id: ""
	I1210 06:49:32.994099   45161 logs.go:282] 0 containers: []
	W1210 06:49:32.994110   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:32.994119   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:32.994186   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:33.028152   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:33.028178   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:33.028184   45161 cri.go:89] found id: ""
	I1210 06:49:33.028192   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:33.028263   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:33.033451   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:33.038334   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:33.038427   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:33.081564   45161 cri.go:89] found id: ""
	I1210 06:49:33.081594   45161 logs.go:282] 0 containers: []
	W1210 06:49:33.081605   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:33.081613   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:33.081677   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:33.124869   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:33.124903   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:33.124910   45161 cri.go:89] found id: ""
	I1210 06:49:33.124918   45161 logs.go:282] 2 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:33.124985   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:33.130673   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:33.136905   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:33.136982   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:33.174345   45161 cri.go:89] found id: ""
	I1210 06:49:33.174389   45161 logs.go:282] 0 containers: []
	W1210 06:49:33.174401   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:33.174409   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:33.174472   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:33.214630   45161 cri.go:89] found id: ""
	I1210 06:49:33.214659   45161 logs.go:282] 0 containers: []
	W1210 06:49:33.214669   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:33.214681   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:33.214696   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:33.231383   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:33.231413   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:33.274077   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:33.274111   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:33.314155   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:49:33.314193   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:33.354299   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:33.354332   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:33.389279   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:33.389321   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:33.666487   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:33.666528   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:33.718651   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:33.718695   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:33.816001   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:33.816035   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:33.895065   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:33.895092   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:33.895111   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:33.955576   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:33.955618   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:36.498424   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:36.499098   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:36.499160   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:36.499221   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:36.549793   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:36.549833   45161 cri.go:89] found id: ""
	I1210 06:49:36.549842   45161 logs.go:282] 1 containers: [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:36.549919   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:36.555454   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:36.555528   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:36.609393   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:36.609422   45161 cri.go:89] found id: ""
	I1210 06:49:36.609433   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:36.609511   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:36.616546   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:36.616658   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:36.669190   45161 cri.go:89] found id: ""
	I1210 06:49:36.669220   45161 logs.go:282] 0 containers: []
	W1210 06:49:36.669229   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:36.669234   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:36.669304   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:36.711787   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:36.711817   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:36.711824   45161 cri.go:89] found id: ""
	I1210 06:49:36.711832   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:36.711906   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:36.716862   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:36.723161   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:36.723246   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:36.762847   45161 cri.go:89] found id: ""
	I1210 06:49:36.762886   45161 logs.go:282] 0 containers: []
	W1210 06:49:36.762897   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:36.762905   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:36.762975   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:36.813068   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:36.813098   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:36.813113   45161 cri.go:89] found id: ""
	I1210 06:49:36.813122   45161 logs.go:282] 2 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:36.813192   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:36.819990   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:36.824646   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:36.824735   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:36.872932   45161 cri.go:89] found id: ""
	I1210 06:49:36.872963   45161 logs.go:282] 0 containers: []
	W1210 06:49:36.872976   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:36.872984   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:36.873047   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:36.915951   45161 cri.go:89] found id: ""
	I1210 06:49:36.915985   45161 logs.go:282] 0 containers: []
	W1210 06:49:36.915993   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:36.916003   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:36.916029   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:36.933950   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:36.933987   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:37.012786   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:37.012817   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:37.012840   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:37.080609   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:37.080655   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:37.115201   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:49:37.115233   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:37.153884   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:37.153913   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:37.195204   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:37.195235   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:37.461578   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:37.461614   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:37.506071   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:37.506110   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:37.622560   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:37.622613   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:37.680204   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:37.680249   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:40.226457   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:45.227568   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 06:49:45.227629   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:45.227708   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:45.264637   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:49:45.264660   45161 cri.go:89] found id: "4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:45.264666   45161 cri.go:89] found id: ""
	I1210 06:49:45.264674   45161 logs.go:282] 2 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2]
	I1210 06:49:45.264729   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:45.269493   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:45.274441   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:45.274519   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:45.317542   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:45.317571   45161 cri.go:89] found id: ""
	I1210 06:49:45.317580   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:45.317637   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:45.323593   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:45.323677   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:45.364091   45161 cri.go:89] found id: ""
	I1210 06:49:45.364115   45161 logs.go:282] 0 containers: []
	W1210 06:49:45.364123   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:45.364130   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:45.364198   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:45.405126   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:45.405150   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:45.405154   45161 cri.go:89] found id: ""
	I1210 06:49:45.405160   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:45.405208   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:45.410113   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:45.415423   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:45.415504   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:45.461427   45161 cri.go:89] found id: ""
	I1210 06:49:45.461456   45161 logs.go:282] 0 containers: []
	W1210 06:49:45.461467   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:45.461474   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:45.461540   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:45.503394   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:45.503421   45161 cri.go:89] found id: "156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:45.503428   45161 cri.go:89] found id: ""
	I1210 06:49:45.503437   45161 logs.go:282] 2 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc]
	I1210 06:49:45.503502   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:45.509431   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:45.514187   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:45.514264   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:45.555341   45161 cri.go:89] found id: ""
	I1210 06:49:45.555388   45161 logs.go:282] 0 containers: []
	W1210 06:49:45.555399   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:45.555407   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:45.555463   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:45.592750   45161 cri.go:89] found id: ""
	I1210 06:49:45.592779   45161 logs.go:282] 0 containers: []
	W1210 06:49:45.592789   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:45.592810   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:45.592825   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 06:49:55.679821   45161 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.08697156s)
	W1210 06:49:55.679866   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1210 06:49:55.679876   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:49:55.679889   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:49:55.732716   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:55.732763   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:55.768820   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:49:55.768863   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:55.801988   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:55.802025   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:49:56.175996   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:56.176030   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:56.222601   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:56.222653   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:56.242042   45161 logs.go:123] Gathering logs for kube-apiserver [4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2] ...
	I1210 06:49:56.242075   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f86b7d1339a14e95d3852074831b7fae04736d6d1b5e2f3ffcd56b19d07e0d2"
	I1210 06:49:56.287938   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:49:56.287977   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:56.342398   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:56.342434   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:56.386692   45161 logs.go:123] Gathering logs for kube-controller-manager [156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc] ...
	I1210 06:49:56.386726   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 156bafa42e5548f5e81c7aad66aef822dd4f8431e08aaff34d2dd32ee5d745fc"
	I1210 06:49:56.425970   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:56.426010   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:59.071133   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:49:59.071772   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:49:59.071834   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:49:59.071877   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:49:59.127272   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:49:59.127300   45161 cri.go:89] found id: ""
	I1210 06:49:59.127310   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:49:59.127384   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:59.132620   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:49:59.132699   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:49:59.172042   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:49:59.172064   45161 cri.go:89] found id: ""
	I1210 06:49:59.172072   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:49:59.172119   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:59.176049   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:49:59.176101   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:49:59.208718   45161 cri.go:89] found id: ""
	I1210 06:49:59.208748   45161 logs.go:282] 0 containers: []
	W1210 06:49:59.208756   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:49:59.208761   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:49:59.208816   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:49:59.243613   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:59.243646   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:59.243654   45161 cri.go:89] found id: ""
	I1210 06:49:59.243662   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:49:59.243765   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:59.248212   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:59.252445   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:49:59.252510   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:49:59.292053   45161 cri.go:89] found id: ""
	I1210 06:49:59.292078   45161 logs.go:282] 0 containers: []
	W1210 06:49:59.292086   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:49:59.292094   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:49:59.292152   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:49:59.328145   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:59.328183   45161 cri.go:89] found id: ""
	I1210 06:49:59.328210   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:49:59.328286   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:49:59.332764   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:49:59.332856   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:49:59.371948   45161 cri.go:89] found id: ""
	I1210 06:49:59.371984   45161 logs.go:282] 0 containers: []
	W1210 06:49:59.371997   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:49:59.372004   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:49:59.372077   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:49:59.404016   45161 cri.go:89] found id: ""
	I1210 06:49:59.404044   45161 logs.go:282] 0 containers: []
	W1210 06:49:59.404056   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:49:59.404073   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:49:59.404093   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:49:59.436857   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:49:59.436888   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:49:59.477040   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:49:59.477068   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:49:59.576419   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:49:59.576471   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:49:59.600439   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:49:59.600495   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:49:59.683084   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:49:59.683113   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:49:59.683129   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:49:59.729631   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:49:59.729665   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:49:59.764663   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:49:59.764692   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:49:59.805316   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:49:59.805365   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:00.182483   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:00.182532   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:02.743183   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:02.745212   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:02.745287   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:02.745373   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:02.782796   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:02.782825   45161 cri.go:89] found id: ""
	I1210 06:50:02.782832   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:02.782891   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:02.787760   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:02.787851   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:02.824641   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:02.824671   45161 cri.go:89] found id: ""
	I1210 06:50:02.824680   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:02.824747   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:02.830754   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:02.830821   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:02.875227   45161 cri.go:89] found id: ""
	I1210 06:50:02.875261   45161 logs.go:282] 0 containers: []
	W1210 06:50:02.875274   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:02.875282   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:02.875371   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:02.910085   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:02.910114   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:02.910121   45161 cri.go:89] found id: ""
	I1210 06:50:02.910133   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:02.910208   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:02.916180   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:02.920680   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:02.920752   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:02.958306   45161 cri.go:89] found id: ""
	I1210 06:50:02.958338   45161 logs.go:282] 0 containers: []
	W1210 06:50:02.958350   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:02.958367   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:02.958430   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:02.999346   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:02.999390   45161 cri.go:89] found id: ""
	I1210 06:50:02.999398   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:02.999470   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:03.004348   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:03.004440   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:03.044130   45161 cri.go:89] found id: ""
	I1210 06:50:03.044167   45161 logs.go:282] 0 containers: []
	W1210 06:50:03.044178   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:03.044185   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:03.044246   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:03.087090   45161 cri.go:89] found id: ""
	I1210 06:50:03.087114   45161 logs.go:282] 0 containers: []
	W1210 06:50:03.087121   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:03.087135   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:03.087149   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:03.411831   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:03.411868   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:03.428736   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:03.428773   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:03.480243   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:03.480294   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:03.527914   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:03.527950   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:03.572814   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:03.572845   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:03.618500   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:03.618538   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:03.663274   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:03.663318   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:03.767092   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:03.767132   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:03.838324   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:03.838366   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:03.838382   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:06.374092   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:06.374739   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:06.374800   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:06.374855   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:06.409087   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:06.409114   45161 cri.go:89] found id: ""
	I1210 06:50:06.409122   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:06.409174   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:06.413476   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:06.413529   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:06.446236   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:06.446267   45161 cri.go:89] found id: ""
	I1210 06:50:06.446280   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:06.446349   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:06.450854   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:06.450926   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:06.488963   45161 cri.go:89] found id: ""
	I1210 06:50:06.488997   45161 logs.go:282] 0 containers: []
	W1210 06:50:06.489009   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:06.489016   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:06.489080   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:06.527537   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:06.527559   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:06.527563   45161 cri.go:89] found id: ""
	I1210 06:50:06.527569   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:06.527624   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:06.532154   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:06.537116   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:06.537188   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:06.571040   45161 cri.go:89] found id: ""
	I1210 06:50:06.571070   45161 logs.go:282] 0 containers: []
	W1210 06:50:06.571083   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:06.571092   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:06.571156   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:06.607935   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:06.607966   45161 cri.go:89] found id: ""
	I1210 06:50:06.607976   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:06.608040   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:06.612746   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:06.612819   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:06.649249   45161 cri.go:89] found id: ""
	I1210 06:50:06.649288   45161 logs.go:282] 0 containers: []
	W1210 06:50:06.649299   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:06.649307   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:06.649399   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:06.682664   45161 cri.go:89] found id: ""
	I1210 06:50:06.682702   45161 logs.go:282] 0 containers: []
	W1210 06:50:06.682714   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:06.682733   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:06.682746   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:06.972864   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:06.972902   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:07.019734   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:07.019768   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:07.126703   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:07.126740   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:07.171340   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:07.171383   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:07.225956   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:07.225987   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:07.246048   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:07.246085   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:07.322976   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:07.323016   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:07.323031   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:07.360130   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:07.360165   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:07.395252   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:07.395281   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:09.939102   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:09.939786   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:09.939848   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:09.939909   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:09.977832   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:09.977875   45161 cri.go:89] found id: ""
	I1210 06:50:09.977886   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:09.977958   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:09.982801   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:09.982872   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:10.022292   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:10.022319   45161 cri.go:89] found id: ""
	I1210 06:50:10.022327   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:10.022404   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:10.026763   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:10.026837   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:10.065919   45161 cri.go:89] found id: ""
	I1210 06:50:10.065953   45161 logs.go:282] 0 containers: []
	W1210 06:50:10.065965   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:10.065973   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:10.066044   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:10.107113   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:10.107144   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:10.107165   45161 cri.go:89] found id: ""
	I1210 06:50:10.107176   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:10.107244   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:10.112984   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:10.117763   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:10.117847   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:10.160242   45161 cri.go:89] found id: ""
	I1210 06:50:10.160270   45161 logs.go:282] 0 containers: []
	W1210 06:50:10.160280   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:10.160287   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:10.160378   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:10.194216   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:10.194246   45161 cri.go:89] found id: ""
	I1210 06:50:10.194256   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:10.194319   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:10.199014   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:10.199084   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:10.234437   45161 cri.go:89] found id: ""
	I1210 06:50:10.234469   45161 logs.go:282] 0 containers: []
	W1210 06:50:10.234480   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:10.234488   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:10.234559   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:10.269240   45161 cri.go:89] found id: ""
	I1210 06:50:10.269273   45161 logs.go:282] 0 containers: []
	W1210 06:50:10.269284   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:10.269303   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:10.269323   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:10.307185   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:10.307214   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:10.344741   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:10.344781   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:10.363070   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:10.363100   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:10.401350   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:10.401398   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:10.671917   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:10.671969   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:10.718699   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:10.718741   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:10.811319   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:10.811369   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:10.884554   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:10.884583   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:10.884595   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:10.940964   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:10.941000   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:13.486586   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:13.487246   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:13.487294   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:13.487388   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:13.521432   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:13.521454   45161 cri.go:89] found id: ""
	I1210 06:50:13.521461   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:13.521522   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:13.525819   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:13.525876   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:13.559107   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:13.559134   45161 cri.go:89] found id: ""
	I1210 06:50:13.559143   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:13.559205   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:13.563905   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:13.563971   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:13.598957   45161 cri.go:89] found id: ""
	I1210 06:50:13.598988   45161 logs.go:282] 0 containers: []
	W1210 06:50:13.598999   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:13.599007   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:13.599070   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:13.638110   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:13.638131   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:13.638135   45161 cri.go:89] found id: ""
	I1210 06:50:13.638142   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:13.638199   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:13.642893   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:13.648099   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:13.648185   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:13.682305   45161 cri.go:89] found id: ""
	I1210 06:50:13.682338   45161 logs.go:282] 0 containers: []
	W1210 06:50:13.682350   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:13.682370   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:13.682442   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:13.716568   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:13.716598   45161 cri.go:89] found id: ""
	I1210 06:50:13.716608   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:13.716669   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:13.721135   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:13.721206   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:13.754623   45161 cri.go:89] found id: ""
	I1210 06:50:13.754651   45161 logs.go:282] 0 containers: []
	W1210 06:50:13.754662   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:13.754668   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:13.754745   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:13.788699   45161 cri.go:89] found id: ""
	I1210 06:50:13.788736   45161 logs.go:282] 0 containers: []
	W1210 06:50:13.788746   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:13.788765   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:13.788780   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:13.804899   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:13.804929   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:13.843461   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:13.843491   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:13.874478   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:13.874511   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:13.909126   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:13.909156   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:13.948946   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:13.948974   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:13.991176   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:13.991212   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:14.085008   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:14.085046   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:14.161511   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:14.161533   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:14.161549   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:14.216493   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:14.216528   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:16.993469   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:16.994080   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:16.994164   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:16.994214   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:17.034704   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:17.034731   45161 cri.go:89] found id: ""
	I1210 06:50:17.034740   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:17.034808   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:17.039254   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:17.039321   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:17.069276   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:17.069306   45161 cri.go:89] found id: ""
	I1210 06:50:17.069315   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:17.069396   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:17.074087   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:17.074143   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:17.114685   45161 cri.go:89] found id: ""
	I1210 06:50:17.114718   45161 logs.go:282] 0 containers: []
	W1210 06:50:17.114729   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:17.114736   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:17.114803   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:17.147977   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:17.148003   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:17.148009   45161 cri.go:89] found id: ""
	I1210 06:50:17.148019   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:17.148093   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:17.153131   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:17.157389   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:17.157469   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:17.191654   45161 cri.go:89] found id: ""
	I1210 06:50:17.191680   45161 logs.go:282] 0 containers: []
	W1210 06:50:17.191692   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:17.191700   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:17.191785   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:17.227245   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:17.227277   45161 cri.go:89] found id: ""
	I1210 06:50:17.227288   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:17.227380   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:17.232161   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:17.232238   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:17.265943   45161 cri.go:89] found id: ""
	I1210 06:50:17.265973   45161 logs.go:282] 0 containers: []
	W1210 06:50:17.265982   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:17.265987   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:17.266048   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:17.304420   45161 cri.go:89] found id: ""
	I1210 06:50:17.304444   45161 logs.go:282] 0 containers: []
	W1210 06:50:17.304452   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:17.304466   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:17.304476   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:17.342460   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:17.342489   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:17.433951   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:17.433989   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:17.506965   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:17.506996   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:17.507022   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:17.771087   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:17.771127   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:17.786907   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:17.786948   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:17.826052   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:17.826087   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:17.876030   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:17.876065   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:17.908617   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:17.908652   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:17.947212   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:17.947241   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:20.487083   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:20.487711   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:20.487770   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:20.487819   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:20.522820   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:20.522841   45161 cri.go:89] found id: ""
	I1210 06:50:20.522848   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:20.522898   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:20.527861   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:20.528038   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:20.561127   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:20.561151   45161 cri.go:89] found id: ""
	I1210 06:50:20.561159   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:20.561219   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:20.565431   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:20.565506   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:20.599953   45161 cri.go:89] found id: ""
	I1210 06:50:20.599980   45161 logs.go:282] 0 containers: []
	W1210 06:50:20.599990   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:20.599997   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:20.600063   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:20.635581   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:20.635611   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:20.635616   45161 cri.go:89] found id: ""
	I1210 06:50:20.635623   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:20.635682   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:20.640726   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:20.645516   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:20.645604   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:20.680974   45161 cri.go:89] found id: ""
	I1210 06:50:20.681006   45161 logs.go:282] 0 containers: []
	W1210 06:50:20.681017   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:20.681024   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:20.681095   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:20.712704   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:20.712740   45161 cri.go:89] found id: ""
	I1210 06:50:20.712749   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:20.712819   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:20.717315   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:20.717396   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:20.750587   45161 cri.go:89] found id: ""
	I1210 06:50:20.750612   45161 logs.go:282] 0 containers: []
	W1210 06:50:20.750620   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:20.750625   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:20.750683   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:20.784296   45161 cri.go:89] found id: ""
	I1210 06:50:20.784328   45161 logs.go:282] 0 containers: []
	W1210 06:50:20.784345   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:20.784376   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:20.784392   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:20.800242   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:20.800270   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:20.869891   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:20.869911   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:20.869926   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:20.908264   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:20.908296   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:20.965188   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:20.965217   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:21.008128   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:21.008163   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:21.049937   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:21.049968   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:21.388867   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:21.388918   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:21.495767   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:21.495805   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:21.531527   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:21.531557   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
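The cycle above repeats for the remainder of this log: minikube probes the apiserver's /healthz endpoint, the TCP connect is refused, and it falls back to collecting component logs before trying again. A minimal sketch of that probe, assuming nothing about minikube's internals beyond the URL visible in the log (the insecure TLS setting is only so the sketch runs without the cluster CA bundle), looks like this in Go:

// Hypothetical sketch of the health probe behind the
// "Checking apiserver healthz ..." / "stopped: ... connection refused" lines:
// an HTTPS GET against /healthz, retried on an interval until it answers.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.50.121:8443/healthz" // endpoint taken from the log
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// The state this log is stuck in: nothing is listening on 8443,
			// so the connect fails before any HTTP exchange happens.
			fmt.Printf("stopped: %v\n", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Printf("healthz returned %s\n", resp.Status)
		return
	}
}

Each failed attempt corresponds to one "stopped: ... connection refused" line above and triggers another round of log gathering.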
	I1210 06:50:24.068044   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:24.068945   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:24.069010   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:24.069068   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:24.118565   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:24.118594   45161 cri.go:89] found id: ""
	I1210 06:50:24.118604   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:24.118675   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:24.123403   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:24.123485   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:24.162931   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:24.162967   45161 cri.go:89] found id: ""
	I1210 06:50:24.162976   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:24.163042   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:24.167341   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:24.167428   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:24.211662   45161 cri.go:89] found id: ""
	I1210 06:50:24.211691   45161 logs.go:282] 0 containers: []
	W1210 06:50:24.211701   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:24.211708   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:24.211773   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:24.256606   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:24.256638   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:24.256645   45161 cri.go:89] found id: ""
	I1210 06:50:24.256654   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:24.256737   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:24.262296   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:24.267636   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:24.267695   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:24.310013   45161 cri.go:89] found id: ""
	I1210 06:50:24.310042   45161 logs.go:282] 0 containers: []
	W1210 06:50:24.310052   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:24.310060   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:24.310152   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:24.358016   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:24.358041   45161 cri.go:89] found id: ""
	I1210 06:50:24.358051   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:24.358129   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:24.364640   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:24.364722   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:24.408333   45161 cri.go:89] found id: ""
	I1210 06:50:24.408384   45161 logs.go:282] 0 containers: []
	W1210 06:50:24.408397   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:24.408414   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:24.408479   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:24.441926   45161 cri.go:89] found id: ""
	I1210 06:50:24.441957   45161 logs.go:282] 0 containers: []
	W1210 06:50:24.441968   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:24.441984   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:24.442001   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:24.484026   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:24.484066   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:24.531487   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:24.531519   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:24.565540   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:24.565567   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:24.645346   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:24.645395   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:24.645415   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:24.682735   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:24.682771   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:24.726711   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:24.726747   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:25.009233   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:25.009268   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:25.057409   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:25.057442   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:25.165709   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:25.165748   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:27.684801   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:27.685447   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:27.685524   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:27.685584   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:27.723659   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:27.723686   45161 cri.go:89] found id: ""
	I1210 06:50:27.723697   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:27.723773   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:27.728410   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:27.728486   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:27.766602   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:27.766631   45161 cri.go:89] found id: ""
	I1210 06:50:27.766641   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:27.766711   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:27.771097   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:27.771185   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:27.805312   45161 cri.go:89] found id: ""
	I1210 06:50:27.805335   45161 logs.go:282] 0 containers: []
	W1210 06:50:27.805343   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:27.805349   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:27.805416   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:27.838835   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:27.838865   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:27.838869   45161 cri.go:89] found id: ""
	I1210 06:50:27.838875   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:27.838923   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:27.843913   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:27.848114   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:27.848178   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:27.879435   45161 cri.go:89] found id: ""
	I1210 06:50:27.879462   45161 logs.go:282] 0 containers: []
	W1210 06:50:27.879469   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:27.879477   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:27.879540   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:27.913649   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:27.913669   45161 cri.go:89] found id: ""
	I1210 06:50:27.913675   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:27.913722   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:27.918568   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:27.918636   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:27.952278   45161 cri.go:89] found id: ""
	I1210 06:50:27.952300   45161 logs.go:282] 0 containers: []
	W1210 06:50:27.952308   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:27.952316   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:27.952385   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:27.991491   45161 cri.go:89] found id: ""
	I1210 06:50:27.991519   45161 logs.go:282] 0 containers: []
	W1210 06:50:27.991529   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:27.991545   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:27.991558   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:28.024782   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:28.024811   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:28.059834   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:28.059874   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:28.092507   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:28.092537   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:28.365831   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:28.365868   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:28.454807   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:28.454843   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:28.493386   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:28.493417   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:28.540102   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:28.540137   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:28.577419   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:28.577452   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:28.592936   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:28.592967   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:28.667283   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
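The per-component gathering steps interleaved with those probes all follow the same shape: `crictl ps -a --quiet --name=<component>` to resolve container IDs, then `crictl logs --tail 400 <id>` for each ID found. A rough reproduction of that loop, run directly on the node as root (the real harness issues the same commands over SSH), could be:

// Sketch only; not minikube's ssh_runner. Lists all containers (running or
// exited) matching each control-plane component and dumps their last 400
// log lines, mirroring the "found id"/"No container was found matching"
// pattern in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gather(component string) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", component, err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container found matching %q\n", component)
		return nil
	}
	for _, id := range ids {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("logs for %s: %w", id, err)
		}
		fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		if err := gather(c); err != nil {
			fmt.Println(err)
		}
	}
}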
	I1210 06:50:31.168925   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:31.169631   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:31.169685   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:31.169745   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:31.208477   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:31.208499   45161 cri.go:89] found id: ""
	I1210 06:50:31.208507   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:31.208580   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:31.213327   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:31.213418   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:31.246544   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:31.246577   45161 cri.go:89] found id: ""
	I1210 06:50:31.246588   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:31.246650   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:31.251612   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:31.251685   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:31.285398   45161 cri.go:89] found id: ""
	I1210 06:50:31.285435   45161 logs.go:282] 0 containers: []
	W1210 06:50:31.285449   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:31.285464   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:31.285532   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:31.318973   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:31.319002   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:31.319009   45161 cri.go:89] found id: ""
	I1210 06:50:31.319019   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:31.319083   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:31.323468   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:31.328084   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:31.328140   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:31.360566   45161 cri.go:89] found id: ""
	I1210 06:50:31.360603   45161 logs.go:282] 0 containers: []
	W1210 06:50:31.360615   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:31.360622   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:31.360686   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:31.399477   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:31.399505   45161 cri.go:89] found id: ""
	I1210 06:50:31.399515   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:31.399578   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:31.404762   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:31.404938   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:31.448584   45161 cri.go:89] found id: ""
	I1210 06:50:31.448611   45161 logs.go:282] 0 containers: []
	W1210 06:50:31.448622   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:31.448630   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:31.448690   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:31.489882   45161 cri.go:89] found id: ""
	I1210 06:50:31.489910   45161 logs.go:282] 0 containers: []
	W1210 06:50:31.489921   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:31.489936   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:31.489951   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:31.586277   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:31.586301   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:31.586317   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:31.631071   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:31.631111   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:31.691034   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:31.691072   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:31.731742   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:31.731789   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:31.781018   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:31.781060   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:31.802227   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:31.802254   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:31.844498   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:31.844542   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:31.883162   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:31.883198   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:32.236648   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:32.236697   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:34.839867   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:34.840624   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:34.840686   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:34.840735   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:34.872906   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:34.872931   45161 cri.go:89] found id: ""
	I1210 06:50:34.872939   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:34.872993   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:34.877464   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:34.877542   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:34.912819   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:34.912860   45161 cri.go:89] found id: ""
	I1210 06:50:34.912881   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:34.912958   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:34.917675   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:34.917748   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:34.949963   45161 cri.go:89] found id: ""
	I1210 06:50:34.949992   45161 logs.go:282] 0 containers: []
	W1210 06:50:34.950003   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:34.950011   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:34.950080   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:34.988225   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:34.988250   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:34.988255   45161 cri.go:89] found id: ""
	I1210 06:50:34.988262   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:34.988326   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:34.994654   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:34.999132   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:34.999211   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:35.035712   45161 cri.go:89] found id: ""
	I1210 06:50:35.035740   45161 logs.go:282] 0 containers: []
	W1210 06:50:35.035751   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:35.035758   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:35.035819   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:35.069035   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:35.069061   45161 cri.go:89] found id: ""
	I1210 06:50:35.069081   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:35.069154   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:35.074104   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:35.074189   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:35.110957   45161 cri.go:89] found id: ""
	I1210 06:50:35.110980   45161 logs.go:282] 0 containers: []
	W1210 06:50:35.110987   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:35.110992   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:35.111043   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:35.144042   45161 cri.go:89] found id: ""
	I1210 06:50:35.144075   45161 logs.go:282] 0 containers: []
	W1210 06:50:35.144083   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:35.144100   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:35.144113   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:35.238282   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:35.238319   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:35.255523   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:35.255567   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:35.298960   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:35.298992   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:35.369902   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:35.369929   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:35.369944   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:35.418594   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:35.418633   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:35.457902   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:35.457935   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:35.492576   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:35.492614   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:35.527183   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:35.527219   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:35.787042   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:35.787077   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:38.326437   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:38.327108   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:38.327171   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:38.327247   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:38.372029   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:38.372055   45161 cri.go:89] found id: ""
	I1210 06:50:38.372070   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:38.372138   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:38.378230   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:38.378312   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:38.419302   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:38.419329   45161 cri.go:89] found id: ""
	I1210 06:50:38.419339   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:38.419427   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:38.425020   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:38.425098   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:38.468008   45161 cri.go:89] found id: ""
	I1210 06:50:38.468028   45161 logs.go:282] 0 containers: []
	W1210 06:50:38.468040   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:38.468045   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:38.468083   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:38.508292   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:38.508322   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:38.508328   45161 cri.go:89] found id: ""
	I1210 06:50:38.508336   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:38.508408   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:38.513702   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:38.518678   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:38.518771   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:38.551025   45161 cri.go:89] found id: ""
	I1210 06:50:38.551057   45161 logs.go:282] 0 containers: []
	W1210 06:50:38.551068   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:38.551075   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:38.551145   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:38.586511   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:38.586534   45161 cri.go:89] found id: ""
	I1210 06:50:38.586543   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:38.586603   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:38.592314   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:38.592428   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:38.633809   45161 cri.go:89] found id: ""
	I1210 06:50:38.633849   45161 logs.go:282] 0 containers: []
	W1210 06:50:38.633861   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:38.633869   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:38.633929   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:38.673773   45161 cri.go:89] found id: ""
	I1210 06:50:38.673814   45161 logs.go:282] 0 containers: []
	W1210 06:50:38.673825   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:38.673840   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:38.673854   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:38.719803   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:38.719836   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:38.770716   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:38.770751   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:38.805690   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:38.805727   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:38.845846   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:38.845875   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:38.866324   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:38.866375   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:38.909261   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:38.909284   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:38.953471   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:38.953509   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:39.291479   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:39.291523   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:39.412730   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:39.412774   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:39.511009   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
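The recurring "describe nodes" failure is a direct consequence of the probe failures above: the node-local kubeconfig points kubectl at localhost:8443, and with no kube-apiserver listening the connection is refused before any request is sent. A quick port check gives the same signal without invoking kubectl; this is only an illustration, not part of the test harness:

// Hypothetical reachability check for the apiserver port that kubectl and
// the health probe both depend on.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // mirrors "connection ... refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port open; kubectl should be able to connect")
}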
	I1210 06:50:42.012421   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:42.012988   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:42.013157   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:42.013225   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:42.055493   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:42.055513   45161 cri.go:89] found id: ""
	I1210 06:50:42.055523   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:42.055569   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:42.061684   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:42.061823   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:42.102307   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:42.102329   45161 cri.go:89] found id: ""
	I1210 06:50:42.102336   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:42.102412   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:42.107717   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:42.107799   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:42.149801   45161 cri.go:89] found id: ""
	I1210 06:50:42.149828   45161 logs.go:282] 0 containers: []
	W1210 06:50:42.149838   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:42.149847   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:42.149908   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:42.189064   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:42.189096   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:42.189102   45161 cri.go:89] found id: ""
	I1210 06:50:42.189111   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:42.189173   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:42.194880   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:42.200596   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:42.200661   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:42.238066   45161 cri.go:89] found id: ""
	I1210 06:50:42.238102   45161 logs.go:282] 0 containers: []
	W1210 06:50:42.238113   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:42.238133   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:42.238197   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:42.274697   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:42.274721   45161 cri.go:89] found id: ""
	I1210 06:50:42.274729   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:42.274789   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:42.280007   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:42.280091   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:42.318386   45161 cri.go:89] found id: ""
	I1210 06:50:42.318411   45161 logs.go:282] 0 containers: []
	W1210 06:50:42.318421   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:42.318429   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:42.318507   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:42.359418   45161 cri.go:89] found id: ""
	I1210 06:50:42.359443   45161 logs.go:282] 0 containers: []
	W1210 06:50:42.359453   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:42.359469   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:42.359485   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:42.407372   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:42.407406   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:42.469687   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:42.469732   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:42.506813   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:42.506838   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:42.556009   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:42.556033   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:42.574742   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:42.574788   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:42.656246   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:42.656278   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:42.656294   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:42.697244   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:42.697285   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:42.737865   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:42.737903   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:43.051843   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:43.051901   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:45.663595   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:45.664211   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:45.664277   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:45.664333   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:45.707332   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:45.707368   45161 cri.go:89] found id: ""
	I1210 06:50:45.707378   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:45.707445   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:45.713602   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:45.713679   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:45.763796   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:45.763832   45161 cri.go:89] found id: ""
	I1210 06:50:45.763843   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:45.763903   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:45.770597   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:45.770651   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:45.812733   45161 cri.go:89] found id: ""
	I1210 06:50:45.812758   45161 logs.go:282] 0 containers: []
	W1210 06:50:45.812768   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:45.812774   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:45.812826   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:45.852257   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:45.852290   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:45.852296   45161 cri.go:89] found id: ""
	I1210 06:50:45.852305   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:45.852392   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:45.857749   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:45.862610   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:45.862688   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:45.897786   45161 cri.go:89] found id: ""
	I1210 06:50:45.897817   45161 logs.go:282] 0 containers: []
	W1210 06:50:45.897825   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:45.897833   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:45.897920   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:45.940670   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:45.940700   45161 cri.go:89] found id: ""
	I1210 06:50:45.940712   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:45.940774   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:45.946142   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:45.946200   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:45.987572   45161 cri.go:89] found id: ""
	I1210 06:50:45.987601   45161 logs.go:282] 0 containers: []
	W1210 06:50:45.987612   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:45.987619   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:45.987674   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:46.028827   45161 cri.go:89] found id: ""
	I1210 06:50:46.028853   45161 logs.go:282] 0 containers: []
	W1210 06:50:46.028864   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:46.028880   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:46.028894   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:46.354224   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:46.354254   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:46.467395   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:46.467431   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:46.513391   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:46.513423   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:46.561093   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:46.561134   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:46.595540   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:46.595572   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:46.635643   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:46.635675   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:46.674701   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:46.674730   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:46.721540   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:46.721566   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:46.742929   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:46.742952   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:46.824933   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:49.326560   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:49.327246   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:49.327303   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:49.327349   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:49.360005   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:49.360026   45161 cri.go:89] found id: ""
	I1210 06:50:49.360034   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:49.360084   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:49.365906   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:49.365988   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:49.409710   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:49.409740   45161 cri.go:89] found id: ""
	I1210 06:50:49.409749   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:49.409814   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:49.414498   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:49.414572   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:49.452940   45161 cri.go:89] found id: ""
	I1210 06:50:49.452970   45161 logs.go:282] 0 containers: []
	W1210 06:50:49.452981   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:49.452988   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:49.453054   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:49.494350   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:49.494382   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:49.494386   45161 cri.go:89] found id: ""
	I1210 06:50:49.494393   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:49.494443   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:49.502287   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:49.506694   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:49.506779   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:49.545573   45161 cri.go:89] found id: ""
	I1210 06:50:49.545603   45161 logs.go:282] 0 containers: []
	W1210 06:50:49.545613   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:49.545620   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:49.545684   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:49.579730   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:49.579762   45161 cri.go:89] found id: ""
	I1210 06:50:49.579774   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:49.579842   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:49.584230   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:49.584294   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:49.623146   45161 cri.go:89] found id: ""
	I1210 06:50:49.623178   45161 logs.go:282] 0 containers: []
	W1210 06:50:49.623190   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:49.623198   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:49.623266   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:49.658543   45161 cri.go:89] found id: ""
	I1210 06:50:49.658567   45161 logs.go:282] 0 containers: []
	W1210 06:50:49.658574   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:49.658588   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:49.658600   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:49.764812   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:49.764850   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:49.841492   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:49.841515   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:49.841532   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:49.884581   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:49.884614   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:49.919614   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:49.919647   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:49.967440   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:49.967467   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:49.985516   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:49.985547   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:50.046479   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:50.046527   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:50.091911   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:50.091940   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:50.362719   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:50.362754   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
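For context, the repeated "Gathering logs for ..." entries in each of these cycles shell out to journalctl and crictl with a 400-line tail, collected over SSH by minikube's ssh_runner. Below is a minimal sketch of the same collection step; it assumes local execution and a hypothetical gather helper rather than minikube's actual code, and the container ID is a placeholder.

// Illustrative sketch only: collect the same diagnostics the log gathers,
// run locally instead of over SSH (an assumption made for brevity).
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command with sudo and prints its combined output.
func gather(name string, args ...string) {
	out, err := exec.Command("sudo", append([]string{name}, args...)...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	fmt.Printf("=== %s ===\n%s\n", name, out)
}

func main() {
	gather("journalctl", "-u", "kubelet", "-n", "400")
	gather("journalctl", "-u", "crio", "-n", "400")
	// Real container IDs would come from `crictl ps -a --quiet --name=<component>`,
	// as in the cri.go entries above; this literal is only a placeholder.
	containerID := "<container-id>"
	gather("crictl", "logs", "--tail", "400", containerID)
}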
	I1210 06:50:52.908211   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:52.908880   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:52.908929   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:52.908975   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:52.943289   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:52.943320   45161 cri.go:89] found id: ""
	I1210 06:50:52.943330   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:52.943414   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:52.947880   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:52.947951   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:52.986570   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:52.986600   45161 cri.go:89] found id: ""
	I1210 06:50:52.986610   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:52.986677   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:52.991916   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:52.991992   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:53.029162   45161 cri.go:89] found id: ""
	I1210 06:50:53.029187   45161 logs.go:282] 0 containers: []
	W1210 06:50:53.029195   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:53.029201   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:53.029263   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:53.069066   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:53.069089   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:53.069094   45161 cri.go:89] found id: ""
	I1210 06:50:53.069103   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:53.069170   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:53.073866   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:53.080204   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:53.080294   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:53.113641   45161 cri.go:89] found id: ""
	I1210 06:50:53.113675   45161 logs.go:282] 0 containers: []
	W1210 06:50:53.113684   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:53.113689   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:53.113742   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:53.153128   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:53.153155   45161 cri.go:89] found id: ""
	I1210 06:50:53.153163   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:53.153233   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:53.157800   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:53.157865   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:53.191457   45161 cri.go:89] found id: ""
	I1210 06:50:53.191491   45161 logs.go:282] 0 containers: []
	W1210 06:50:53.191499   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:53.191505   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:53.191559   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:53.224982   45161 cri.go:89] found id: ""
	I1210 06:50:53.225018   45161 logs.go:282] 0 containers: []
	W1210 06:50:53.225030   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:53.225049   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:53.225064   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:53.264936   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:53.264963   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:53.302486   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:53.302515   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:53.377238   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:53.377272   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:53.377289   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:53.420069   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:53.420108   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:53.459328   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:53.459372   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:53.494898   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:53.494928   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:53.750211   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:53.750260   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:53.855187   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:53.855228   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:53.870623   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:53.870656   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:56.417463   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:50:56.418161   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:50:56.418215   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:56.418263   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:56.454380   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:56.454404   45161 cri.go:89] found id: ""
	I1210 06:50:56.454413   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:50:56.454475   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:56.461153   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:50:56.461235   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:56.506233   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:56.506254   45161 cri.go:89] found id: ""
	I1210 06:50:56.506265   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:50:56.506328   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:56.511914   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:50:56.511998   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:56.553465   45161 cri.go:89] found id: ""
	I1210 06:50:56.553497   45161 logs.go:282] 0 containers: []
	W1210 06:50:56.553509   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:50:56.553516   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:56.553586   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:56.606279   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:56.606308   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:56.606316   45161 cri.go:89] found id: ""
	I1210 06:50:56.606325   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:50:56.606408   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:56.613987   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:56.620122   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:56.620207   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:56.666867   45161 cri.go:89] found id: ""
	I1210 06:50:56.666900   45161 logs.go:282] 0 containers: []
	W1210 06:50:56.666914   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:56.666922   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:56.666994   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:56.710510   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:56.710546   45161 cri.go:89] found id: ""
	I1210 06:50:56.710558   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:50:56.710635   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:50:56.715425   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:56.715527   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:56.750011   45161 cri.go:89] found id: ""
	I1210 06:50:56.750044   45161 logs.go:282] 0 containers: []
	W1210 06:50:56.750055   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:56.750066   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:56.750144   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:56.797822   45161 cri.go:89] found id: ""
	I1210 06:50:56.797859   45161 logs.go:282] 0 containers: []
	W1210 06:50:56.797873   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:56.797907   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:56.797938   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:56.819961   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:50:56.820007   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:50:56.865730   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:50:56.865792   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:56.912311   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:56.912348   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:56.996545   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:56.996588   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:50:56.996609   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:50:57.053588   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:50:57.053641   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:50:57.101002   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:50:57.101041   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:50:57.148231   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:50:57.148263   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:50:57.191288   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:50:57.191331   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:50:57.447136   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:57.447174   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:51:00.047898   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:51:00.048622   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:51:00.048742   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:51:00.048811   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:51:00.096935   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:51:00.096961   45161 cri.go:89] found id: ""
	I1210 06:51:00.096970   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:51:00.097070   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:00.102953   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:51:00.103033   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:51:00.146067   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:51:00.146094   45161 cri.go:89] found id: ""
	I1210 06:51:00.146103   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:51:00.146163   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:00.152646   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:51:00.152730   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:51:00.197327   45161 cri.go:89] found id: ""
	I1210 06:51:00.197377   45161 logs.go:282] 0 containers: []
	W1210 06:51:00.197390   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:51:00.197398   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:51:00.197467   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:51:00.246598   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:51:00.246624   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:51:00.246630   45161 cri.go:89] found id: ""
	I1210 06:51:00.246640   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:51:00.246710   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:00.253843   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:00.260565   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:51:00.260654   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:51:00.306130   45161 cri.go:89] found id: ""
	I1210 06:51:00.306156   45161 logs.go:282] 0 containers: []
	W1210 06:51:00.306166   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:51:00.306173   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:51:00.306245   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:51:00.349473   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:51:00.349509   45161 cri.go:89] found id: ""
	I1210 06:51:00.349539   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:51:00.349596   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:00.354202   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:51:00.354280   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:51:00.389159   45161 cri.go:89] found id: ""
	I1210 06:51:00.389189   45161 logs.go:282] 0 containers: []
	W1210 06:51:00.389201   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:51:00.389208   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:51:00.389274   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:51:00.424746   45161 cri.go:89] found id: ""
	I1210 06:51:00.424774   45161 logs.go:282] 0 containers: []
	W1210 06:51:00.424784   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:51:00.424800   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:51:00.424814   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:51:00.530846   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:51:00.530895   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:51:00.571338   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:51:00.571387   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:51:00.636298   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:51:00.636343   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:51:00.681872   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:51:00.681903   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:51:00.724855   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:51:00.724888   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:51:00.742574   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:51:00.742611   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:51:00.812780   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:51:00.812825   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:51:00.812842   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:51:00.848094   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:51:00.848123   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:51:01.102378   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:51:01.102423   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:51:03.641729   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:51:03.642461   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:51:03.642519   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:51:03.642580   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:51:03.676308   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:51:03.676327   45161 cri.go:89] found id: ""
	I1210 06:51:03.676334   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:51:03.676411   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:03.681112   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:51:03.681184   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:51:03.716558   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:51:03.716583   45161 cri.go:89] found id: ""
	I1210 06:51:03.716590   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:51:03.716642   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:03.721169   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:51:03.721242   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:51:03.756730   45161 cri.go:89] found id: ""
	I1210 06:51:03.756751   45161 logs.go:282] 0 containers: []
	W1210 06:51:03.756758   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:51:03.756764   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:51:03.756830   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:51:03.787177   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:51:03.787197   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:51:03.787201   45161 cri.go:89] found id: ""
	I1210 06:51:03.787208   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:51:03.787266   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:03.791463   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:03.795623   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:51:03.795687   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:51:03.828541   45161 cri.go:89] found id: ""
	I1210 06:51:03.828566   45161 logs.go:282] 0 containers: []
	W1210 06:51:03.828574   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:51:03.828581   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:51:03.828631   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:51:03.863179   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:51:03.863213   45161 cri.go:89] found id: ""
	I1210 06:51:03.863222   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:51:03.863275   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:03.867953   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:51:03.868027   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:51:03.901560   45161 cri.go:89] found id: ""
	I1210 06:51:03.901586   45161 logs.go:282] 0 containers: []
	W1210 06:51:03.901593   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:51:03.901598   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:51:03.901655   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:51:03.935097   45161 cri.go:89] found id: ""
	I1210 06:51:03.935124   45161 logs.go:282] 0 containers: []
	W1210 06:51:03.935131   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:51:03.935149   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:51:03.935163   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:51:04.003696   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:51:04.003722   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:51:04.003734   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:51:04.041426   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:51:04.041459   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:51:04.089413   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:51:04.089449   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:51:04.122421   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:51:04.122457   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:51:04.221125   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:51:04.221160   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:51:04.236872   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:51:04.236899   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:51:04.267431   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:51:04.267455   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:51:04.298118   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:51:04.298150   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:51:04.543161   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:51:04.543205   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:51:07.085467   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:51:07.086096   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:51:07.086156   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:51:07.086222   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:51:07.127713   45161 cri.go:89] found id: "5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:51:07.127748   45161 cri.go:89] found id: ""
	I1210 06:51:07.127759   45161 logs.go:282] 1 containers: [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6]
	I1210 06:51:07.127815   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:07.133384   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:51:07.133468   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:51:07.173969   45161 cri.go:89] found id: "522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:51:07.173990   45161 cri.go:89] found id: ""
	I1210 06:51:07.173999   45161 logs.go:282] 1 containers: [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6]
	I1210 06:51:07.174067   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:07.179042   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:51:07.179101   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:51:07.219116   45161 cri.go:89] found id: ""
	I1210 06:51:07.219146   45161 logs.go:282] 0 containers: []
	W1210 06:51:07.219157   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:51:07.219164   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:51:07.219229   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:51:07.253959   45161 cri.go:89] found id: "494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:51:07.253987   45161 cri.go:89] found id: "8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:51:07.253995   45161 cri.go:89] found id: ""
	I1210 06:51:07.254005   45161 logs.go:282] 2 containers: [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829]
	I1210 06:51:07.254078   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:07.258751   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:07.263061   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:51:07.263120   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:51:07.306080   45161 cri.go:89] found id: ""
	I1210 06:51:07.306105   45161 logs.go:282] 0 containers: []
	W1210 06:51:07.306116   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:51:07.306123   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:51:07.306173   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:51:07.353181   45161 cri.go:89] found id: "4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:51:07.353214   45161 cri.go:89] found id: ""
	I1210 06:51:07.353239   45161 logs.go:282] 1 containers: [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0]
	I1210 06:51:07.353300   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:51:07.359109   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:51:07.359186   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:51:07.398111   45161 cri.go:89] found id: ""
	I1210 06:51:07.398143   45161 logs.go:282] 0 containers: []
	W1210 06:51:07.398155   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:51:07.398162   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:51:07.398226   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:51:07.438520   45161 cri.go:89] found id: ""
	I1210 06:51:07.438543   45161 logs.go:282] 0 containers: []
	W1210 06:51:07.438551   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:51:07.438567   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:51:07.438578   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:51:07.531063   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:51:07.531087   45161 logs.go:123] Gathering logs for kube-apiserver [5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6] ...
	I1210 06:51:07.531100   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cd1758911fa163105cdfbcc12004db3d8eb90e72fc38ef7314d661db3ddeca6"
	I1210 06:51:07.577368   45161 logs.go:123] Gathering logs for kube-controller-manager [4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0] ...
	I1210 06:51:07.577421   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc3019e1aedb5d55ebc4844d32a6d0f52fb0e2d2bb81232a839d8efc6f973e0"
	I1210 06:51:07.617954   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:51:07.617988   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:51:07.910138   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:51:07.910176   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:51:07.951179   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:51:07.951210   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:51:08.061809   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:51:08.061847   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:51:08.084594   45161 logs.go:123] Gathering logs for etcd [522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6] ...
	I1210 06:51:08.084631   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 522b5ca6d919e682e6082587df415b8ad371e1237ec329fe71c50d8d99ef73e6"
	I1210 06:51:08.136718   45161 logs.go:123] Gathering logs for kube-scheduler [494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97] ...
	I1210 06:51:08.136745   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494921037a2d1c85d1c601e1bbc5c3a003a5b749cbc0639f5010481a1ac2df97"
	I1210 06:51:08.174081   45161 logs.go:123] Gathering logs for kube-scheduler [8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829] ...
	I1210 06:51:08.174111   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d1028dffad16d97892b198eb725bcb56175432924156a0048a287f418032829"
	I1210 06:51:10.712130   45161 api_server.go:253] Checking apiserver healthz at https://192.168.50.121:8443/healthz ...
	I1210 06:51:10.712883   45161 api_server.go:269] stopped: https://192.168.50.121:8443/healthz: Get "https://192.168.50.121:8443/healthz": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:51:10.712959   45161 kubeadm.go:602] duration metric: took 4m7.133893028s to restartPrimaryControlPlane
	W1210 06:51:10.713021   45161 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
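The cycle recorded above is minikube polling the apiserver's /healthz endpoint every few seconds, re-enumerating control-plane containers, and re-gathering their logs until its restart budget runs out (the log reports 4m7s before it falls back to a cluster reset). A minimal sketch of such a polling loop follows; the pollHealthz helper, the insecure TLS client, the 3-second retry interval, and the 4-minute budget are assumptions for illustration, not values taken from minikube's source.

// Illustrative sketch of a healthz retry loop in the spirit of the
// api_server.go entries above. Names and timings are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz retries GET <url> until it returns 200 or the timeout elapses.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate during bootstrap,
		// so this sketch skips verification (assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane answered
			}
		}
		time.Sleep(3 * time.Second) // approximate spacing seen between checks in the log
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.50.121:8443/healthz", 4*time.Minute); err != nil {
		// At this point the log above shows minikube running `kubeadm reset`.
		fmt.Println("giving up:", err)
	}
}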
	I1210 06:51:10.713095   45161 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:51:13.404795   45161 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.691671548s)
	I1210 06:51:13.404887   45161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:51:13.428915   45161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:51:13.447124   45161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:51:13.462973   45161 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:51:13.462996   45161 kubeadm.go:158] found existing configuration files:
	
	I1210 06:51:13.463049   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:51:13.478847   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:51:13.478933   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:51:13.495106   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:51:13.508873   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:51:13.508955   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:51:13.524313   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:51:13.539070   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:51:13.539127   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:51:13.554912   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:51:13.569593   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:51:13.569685   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
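The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file where it is absent, so that the subsequent `kubeadm init` regenerates a consistent set. A minimal sketch of that cleanup decision follows; it assumes direct local file access instead of minikube's ssh_runner, and while the endpoint and paths are copied from the log, the helper itself is purely illustrative.

// Illustrative sketch of the stale-kubeconfig cleanup recorded above:
// keep a file only if it points at the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm init rewrites it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("cleanup failed:", rmErr)
			}
			continue
		}
		fmt.Println("keeping", f)
	}
}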
	I1210 06:51:13.584987   45161 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 06:51:13.647289   45161 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:51:13.647379   45161 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:51:13.839187   45161 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:51:13.839336   45161 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:51:13.839490   45161 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:51:13.853858   45161 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:51:13.857464   45161 out.go:252]   - Generating certificates and keys ...
	I1210 06:51:13.857582   45161 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:51:13.857696   45161 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:51:13.857817   45161 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:51:13.857931   45161 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:51:13.858040   45161 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:51:13.858117   45161 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:51:13.858212   45161 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:51:13.858342   45161 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:51:13.858480   45161 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:51:13.858591   45161 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:51:13.858644   45161 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:51:13.858725   45161 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:51:13.921147   45161 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:51:13.952796   45161 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:51:14.104560   45161 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:51:14.219995   45161 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:51:14.256391   45161 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:51:14.257053   45161 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:51:14.259545   45161 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:51:14.320543   45161 out.go:252]   - Booting up control plane ...
	I1210 06:51:14.320700   45161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:51:14.320814   45161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:51:14.320901   45161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:51:14.321050   45161 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:51:14.321191   45161 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:51:14.321374   45161 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:51:14.321507   45161 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:51:14.321568   45161 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:51:14.535131   45161 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:51:14.535297   45161 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:51:15.537225   45161 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002060029s
	I1210 06:51:15.541092   45161 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:51:15.541224   45161 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	I1210 06:51:15.541382   45161 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:51:15.541489   45161 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:51:17.047043   45161 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.506296008s
	I1210 06:51:36.912683   45161 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 21.371628627s
	I1210 06:55:15.542553   45161 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.000177525s
	I1210 06:55:15.542587   45161 kubeadm.go:319] 
	I1210 06:55:15.542763   45161 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1210 06:55:15.542850   45161 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 06:55:15.542998   45161 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1210 06:55:15.543124   45161 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 06:55:15.543237   45161 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1210 06:55:15.543338   45161 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1210 06:55:15.543350   45161 kubeadm.go:319] 
	I1210 06:55:15.545322   45161 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:55:15.545751   45161 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: Get "https://192.168.50.121:8443/livez?timeout=10s": dial tcp 192.168.50.121:8443: connect: connection refused
	I1210 06:55:15.545861   45161 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:55:15.546004   45161 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002060029s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.506296008s
	[control-plane-check] kube-scheduler is healthy after 21.371628627s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000177525s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: Get "https://192.168.50.121:8443/livez?timeout=10s": dial tcp 192.168.50.121:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002060029s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.506296008s
	[control-plane-check] kube-scheduler is healthy after 21.371628627s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000177525s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: Get "https://192.168.50.121:8443/livez?timeout=10s": dial tcp 192.168.50.121:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 06:55:15.546083   45161 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:55:17.102632   45161 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.556514463s)
	I1210 06:55:17.102729   45161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:55:17.129151   45161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:55:17.141477   45161 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:55:17.141494   45161 kubeadm.go:158] found existing configuration files:
	
	I1210 06:55:17.141547   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:55:17.152560   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:55:17.152632   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:55:17.165595   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:55:17.177919   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:55:17.177972   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:55:17.191371   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:55:17.203968   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:55:17.204046   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:55:17.217339   45161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:55:17.228863   45161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:55:17.228939   45161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:55:17.240412   45161 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 06:55:17.287162   45161 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:55:17.287249   45161 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:55:17.438553   45161 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:55:17.438808   45161 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:55:17.438975   45161 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:55:17.450266   45161 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:55:17.452269   45161 out.go:252]   - Generating certificates and keys ...
	I1210 06:55:17.452399   45161 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:55:17.452509   45161 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:55:17.452629   45161 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:55:17.452712   45161 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:55:17.452817   45161 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:55:17.452919   45161 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:55:17.453021   45161 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:55:17.453123   45161 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:55:17.453224   45161 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:55:17.453335   45161 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:55:17.453403   45161 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:55:17.453486   45161 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:55:17.786632   45161 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:55:17.820748   45161 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:55:17.929992   45161 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:55:18.004732   45161 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:55:18.061014   45161 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:55:18.061552   45161 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:55:18.066977   45161 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:55:18.068936   45161 out.go:252]   - Booting up control plane ...
	I1210 06:55:18.069109   45161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:55:18.069218   45161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:55:18.069307   45161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:55:18.093140   45161 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:55:18.093772   45161 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:55:18.103174   45161 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:55:18.103688   45161 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:55:18.103790   45161 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:55:18.289838   45161 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:55:18.290009   45161 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:55:18.791370   45161 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.437538ms
	I1210 06:55:18.794523   45161 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:55:18.794655   45161 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	I1210 06:55:18.794768   45161 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:55:18.794837   45161 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:55:19.801246   45161 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006030468s
	I1210 06:55:40.779612   45161 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 21.985150772s
	I1210 06:59:18.800501   45161 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	I1210 06:59:18.800541   45161 kubeadm.go:319] 
	I1210 06:59:18.800662   45161 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1210 06:59:18.800803   45161 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 06:59:18.800935   45161 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1210 06:59:18.801088   45161 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 06:59:18.801174   45161 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1210 06:59:18.801278   45161 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1210 06:59:18.801310   45161 kubeadm.go:319] 
	I1210 06:59:18.801460   45161 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:59:18.801786   45161 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1210 06:59:18.801904   45161 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:59:18.801926   45161 kubeadm.go:403] duration metric: took 12m15.332011326s to StartCluster
	I1210 06:59:18.801973   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:59:18.802028   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:59:18.839789   45161 cri.go:89] found id: "003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07"
	I1210 06:59:18.839833   45161 cri.go:89] found id: ""
	I1210 06:59:18.839841   45161 logs.go:282] 1 containers: [003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07]
	I1210 06:59:18.839897   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:59:18.844251   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:59:18.844326   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:59:18.876803   45161 cri.go:89] found id: ""
	I1210 06:59:18.876829   45161 logs.go:282] 0 containers: []
	W1210 06:59:18.876836   45161 logs.go:284] No container was found matching "etcd"
	I1210 06:59:18.876845   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:59:18.876907   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:59:18.908922   45161 cri.go:89] found id: ""
	I1210 06:59:18.908949   45161 logs.go:282] 0 containers: []
	W1210 06:59:18.908960   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:59:18.908967   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:59:18.909032   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:59:18.944065   45161 cri.go:89] found id: "04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4"
	I1210 06:59:18.944097   45161 cri.go:89] found id: ""
	I1210 06:59:18.944105   45161 logs.go:282] 1 containers: [04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4]
	I1210 06:59:18.944158   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:59:18.948742   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:59:18.948815   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:59:18.980123   45161 cri.go:89] found id: ""
	I1210 06:59:18.980151   45161 logs.go:282] 0 containers: []
	W1210 06:59:18.980159   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:59:18.980165   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:59:18.980225   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:59:19.014536   45161 cri.go:89] found id: "f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389"
	I1210 06:59:19.014561   45161 cri.go:89] found id: ""
	I1210 06:59:19.014569   45161 logs.go:282] 1 containers: [f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389]
	I1210 06:59:19.014637   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:59:19.019568   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:59:19.019642   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:59:19.053148   45161 cri.go:89] found id: ""
	I1210 06:59:19.053185   45161 logs.go:282] 0 containers: []
	W1210 06:59:19.053197   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:59:19.053206   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:59:19.053280   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:59:19.084959   45161 cri.go:89] found id: ""
	I1210 06:59:19.084989   45161 logs.go:282] 0 containers: []
	W1210 06:59:19.085001   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:59:19.085013   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:59:19.085031   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:59:19.158878   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:59:19.158908   45161 logs.go:123] Gathering logs for kube-apiserver [003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07] ...
	I1210 06:59:19.158924   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07"
	I1210 06:59:19.195495   45161 logs.go:123] Gathering logs for kube-scheduler [04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4] ...
	I1210 06:59:19.195531   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4"
	I1210 06:59:19.227791   45161 logs.go:123] Gathering logs for kube-controller-manager [f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389] ...
	I1210 06:59:19.227822   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389"
	I1210 06:59:19.260502   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:59:19.260533   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:59:19.512535   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:59:19.512571   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:59:19.551411   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:59:19.551442   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:59:19.649252   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:59:19.649290   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 06:59:19.665711   45161 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.437538ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.006030468s
	[control-plane-check] kube-scheduler is healthy after 21.985150772s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:59:19.665784   45161 out.go:285] * 
	* 
	W1210 06:59:19.665854   45161 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.437538ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.006030468s
	[control-plane-check] kube-scheduler is healthy after 21.985150772s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.437538ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.006030468s
	[control-plane-check] kube-scheduler is healthy after 21.985150772s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:59:19.665868   45161 out.go:285] * 
	* 
	W1210 06:59:19.667900   45161 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:59:19.671401   45161 out.go:203] 
	W1210 06:59:19.672652   45161 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.437538ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.006030468s
	[control-plane-check] kube-scheduler is healthy after 21.985150772s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.437538ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.006030468s
	[control-plane-check] kube-scheduler is healthy after 21.985150772s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:59:19.672687   45161 out.go:285] * 
	* 
	I1210 06:59:19.674425   45161 out.go:203] 

                                                
                                                
** /stderr **
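The failure above is kubeadm's wait-control-plane phase giving up after kube-apiserver never answered at https://192.168.50.121:8443/livez. If you want to reproduce that check by hand while following kubeadm's crictl suggestions, a minimal Go probe along these lines can help; it is only a debugging sketch (endpoint and 4-minute budget taken from the log, TLS verification skipped because the apiserver presents the cluster CA's certificate), not kubeadm's or minikube's implementation.

```go
// Poll a kube-apiserver /livez endpoint until it answers or a deadline
// expires. Debugging sketch only; the URL and timeout mirror the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const livez = "https://192.168.50.121:8443/livez" // endpoint from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification only for a quick manual probe of a test cluster.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute) // kubeadm waits up to 4m0s
	for time.Now().Before(deadline) {
		resp, err := client.Get(livez)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s -> %d %s\n", livez, resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		} else {
			fmt.Println("livez probe failed:", err)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kube-apiserver never became healthy; inspect its container logs with crictl as suggested above")
}
```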
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-921183 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-12-10 06:59:20.027755466 +0000 UTC m=+4532.612273586
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-921183 -n kubernetes-upgrade-921183
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-921183 -n kubernetes-upgrade-921183: exit status 2 (201.421443ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-921183 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ ssh     │ guest-747858 ssh df -t ext4 /var/lib/minikube | grep /var/lib/minikube                                                                                                                                                                             │ guest-747858                 │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ ssh     │ guest-747858 ssh df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker                                                                                                                                                                       │ guest-747858                 │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ ssh     │ guest-747858 ssh df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox                                                                                                                                                                               │ guest-747858                 │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ ssh     │ guest-747858 ssh df -t ext4 /var/lib/cni | grep /var/lib/cni                                                                                                                                                                                       │ guest-747858                 │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ ssh     │ guest-747858 ssh df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet                                                                                                                                                                               │ guest-747858                 │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ ssh     │ guest-747858 ssh df -t ext4 /var/lib/docker | grep /var/lib/docker                                                                                                                                                                                 │ guest-747858                 │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ ssh     │ guest-747858 ssh cat /version.json                                                                                                                                                                                                                 │ guest-747858                 │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ ssh     │ guest-747858 ssh test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'                                                                                                                                                                  │ guest-747858                 │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ delete  │ -p guest-747858                                                                                                                                                                                                                                    │ guest-747858                 │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ addons  │ enable metrics-server -p newest-cni-634960 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-634960            │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ stop    │ -p newest-cni-634960 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-634960            │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-634960 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-634960            │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:57 UTC │
	│ start   │ -p newest-cni-634960 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-634960            │ jenkins │ v1.37.0 │ 10 Dec 25 06:57 UTC │ 10 Dec 25 06:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-289565 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-289565 │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │ 10 Dec 25 06:58 UTC │
	│ start   │ -p default-k8s-diff-port-289565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-289565 │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │ 10 Dec 25 06:58 UTC │
	│ image   │ newest-cni-634960 image list --format=json                                                                                                                                                                                                         │ newest-cni-634960            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │ 10 Dec 25 06:58 UTC │
	│ pause   │ -p newest-cni-634960 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-634960            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │ 10 Dec 25 06:58 UTC │
	│ unpause │ -p newest-cni-634960 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-634960            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │ 10 Dec 25 06:58 UTC │
	│ delete  │ -p newest-cni-634960                                                                                                                                                                                                                               │ newest-cni-634960            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │ 10 Dec 25 06:58 UTC │
	│ delete  │ -p newest-cni-634960                                                                                                                                                                                                                               │ newest-cni-634960            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │ 10 Dec 25 06:58 UTC │
	│ image   │ default-k8s-diff-port-289565 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-289565 │ jenkins │ v1.37.0 │ 10 Dec 25 06:59 UTC │ 10 Dec 25 06:59 UTC │
	│ pause   │ -p default-k8s-diff-port-289565 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-289565 │ jenkins │ v1.37.0 │ 10 Dec 25 06:59 UTC │ 10 Dec 25 06:59 UTC │
	│ unpause │ -p default-k8s-diff-port-289565 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-289565 │ jenkins │ v1.37.0 │ 10 Dec 25 06:59 UTC │ 10 Dec 25 06:59 UTC │
	│ delete  │ -p default-k8s-diff-port-289565                                                                                                                                                                                                                    │ default-k8s-diff-port-289565 │ jenkins │ v1.37.0 │ 10 Dec 25 06:59 UTC │ 10 Dec 25 06:59 UTC │
	│ delete  │ -p default-k8s-diff-port-289565                                                                                                                                                                                                                    │ default-k8s-diff-port-289565 │ jenkins │ v1.37.0 │ 10 Dec 25 06:59 UTC │ 10 Dec 25 06:59 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:58:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
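The "Last Start" section below is klog-formatted, as the header line above documents. When slicing these long reports it can help to filter by severity; a rough sketch of parsing that format follows (the field names are my own and not part of any minikube API):

```go
// Parse klog-style lines of the form
//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
// and keep only warnings and worse. A filtering sketch for this report.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are very long
	for sc.Scan() {
		m := klogLine.FindStringSubmatch(strings.TrimSpace(sc.Text()))
		if m == nil {
			continue
		}
		sev, date, ts, pid, file, line, msg := m[1], m[2], m[3], m[4], m[5], m[6], m[7]
		if sev == "W" || sev == "E" || sev == "F" {
			fmt.Printf("%s %s %s pid=%s %s:%s %s\n", sev, date, ts, pid, file, line, msg)
		}
	}
}
```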
	I1210 06:58:11.848757   57899 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:58:11.848858   57899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:58:11.848864   57899 out.go:374] Setting ErrFile to fd 2...
	I1210 06:58:11.848869   57899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:58:11.849065   57899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:58:11.849534   57899 out.go:368] Setting JSON to false
	I1210 06:58:11.850408   57899 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6036,"bootTime":1765343856,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:58:11.850469   57899 start.go:143] virtualization: kvm guest
	I1210 06:58:11.855537   57899 out.go:179] * [default-k8s-diff-port-289565] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:58:11.857062   57899 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:58:11.857071   57899 notify.go:221] Checking for updates...
	I1210 06:58:11.859320   57899 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:58:11.860559   57899 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:58:11.861705   57899 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 06:58:11.862779   57899 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:58:11.863925   57899 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:58:11.865441   57899 config.go:182] Loaded profile config "default-k8s-diff-port-289565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:58:11.865939   57899 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:58:11.902153   57899 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 06:58:11.903241   57899 start.go:309] selected driver: kvm2
	I1210 06:58:11.903258   57899 start.go:927] validating driver "kvm2" against &{Name:default-k8s-diff-port-289565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-289565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Listen
Address: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:58:11.903379   57899 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:58:11.904348   57899 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:58:11.904398   57899 cni.go:84] Creating CNI manager for ""
	I1210 06:58:11.904449   57899 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:58:11.904485   57899 start.go:353] cluster config:
	{Name:default-k8s-diff-port-289565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-289565 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:58:11.904577   57899 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:58:11.906106   57899 out.go:179] * Starting "default-k8s-diff-port-289565" primary control-plane node in "default-k8s-diff-port-289565" cluster
	I1210 06:58:11.907185   57899 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:58:11.907231   57899 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 06:58:11.907252   57899 cache.go:65] Caching tarball of preloaded images
	I1210 06:58:11.907349   57899 preload.go:238] Found /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:58:11.907370   57899 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 06:58:11.907477   57899 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565/config.json ...
	I1210 06:58:11.907712   57899 start.go:360] acquireMachinesLock for default-k8s-diff-port-289565: {Name:mkc15d5369b31c34b8a5517a09471706fa3f291a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 06:58:11.907760   57899 start.go:364] duration metric: took 27.834µs to acquireMachinesLock for "default-k8s-diff-port-289565"
	I1210 06:58:11.907781   57899 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:58:11.907787   57899 fix.go:54] fixHost starting: 
	I1210 06:58:11.909641   57899 fix.go:112] recreateIfNeeded on default-k8s-diff-port-289565: state=Stopped err=<nil>
	W1210 06:58:11.909662   57899 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:58:11.818111   57657 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 06:58:11.818188   57657 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 06:58:11.819280   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:58:11.819299   57657 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:58:11.819644   57657 main.go:143] libmachine: domain newest-cni-634960 has defined MAC address 52:54:00:fd:dd:55 in network mk-newest-cni-634960
	I1210 06:58:11.820294   57657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fd:dd:55", ip: ""} in network mk-newest-cni-634960: {Iface:virbr5 ExpiryTime:2025-12-10 07:57:54 +0000 UTC Type:0 Mac:52:54:00:fd:dd:55 Iaid: IPaddr:192.168.83.229 Prefix:24 Hostname:newest-cni-634960 Clientid:01:52:54:00:fd:dd:55}
	I1210 06:58:11.820321   57657 main.go:143] libmachine: domain newest-cni-634960 has defined IP address 192.168.83.229 and MAC address 52:54:00:fd:dd:55 in network mk-newest-cni-634960
	I1210 06:58:11.820652   57657 sshutil.go:53] new ssh client: &{IP:192.168.83.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/newest-cni-634960/id_rsa Username:docker}
	I1210 06:58:11.820685   57657 main.go:143] libmachine: domain newest-cni-634960 has defined MAC address 52:54:00:fd:dd:55 in network mk-newest-cni-634960
	I1210 06:58:11.821186   57657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fd:dd:55", ip: ""} in network mk-newest-cni-634960: {Iface:virbr5 ExpiryTime:2025-12-10 07:57:54 +0000 UTC Type:0 Mac:52:54:00:fd:dd:55 Iaid: IPaddr:192.168.83.229 Prefix:24 Hostname:newest-cni-634960 Clientid:01:52:54:00:fd:dd:55}
	I1210 06:58:11.821217   57657 main.go:143] libmachine: domain newest-cni-634960 has defined IP address 192.168.83.229 and MAC address 52:54:00:fd:dd:55 in network mk-newest-cni-634960
	I1210 06:58:11.822005   57657 sshutil.go:53] new ssh client: &{IP:192.168.83.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/newest-cni-634960/id_rsa Username:docker}
	I1210 06:58:11.823292   57657 main.go:143] libmachine: domain newest-cni-634960 has defined MAC address 52:54:00:fd:dd:55 in network mk-newest-cni-634960
	I1210 06:58:11.823688   57657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fd:dd:55", ip: ""} in network mk-newest-cni-634960: {Iface:virbr5 ExpiryTime:2025-12-10 07:57:54 +0000 UTC Type:0 Mac:52:54:00:fd:dd:55 Iaid: IPaddr:192.168.83.229 Prefix:24 Hostname:newest-cni-634960 Clientid:01:52:54:00:fd:dd:55}
	I1210 06:58:11.823711   57657 main.go:143] libmachine: domain newest-cni-634960 has defined IP address 192.168.83.229 and MAC address 52:54:00:fd:dd:55 in network mk-newest-cni-634960
	I1210 06:58:11.823849   57657 main.go:143] libmachine: domain newest-cni-634960 has defined MAC address 52:54:00:fd:dd:55 in network mk-newest-cni-634960
	I1210 06:58:11.823900   57657 sshutil.go:53] new ssh client: &{IP:192.168.83.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/newest-cni-634960/id_rsa Username:docker}
	I1210 06:58:11.824335   57657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fd:dd:55", ip: ""} in network mk-newest-cni-634960: {Iface:virbr5 ExpiryTime:2025-12-10 07:57:54 +0000 UTC Type:0 Mac:52:54:00:fd:dd:55 Iaid: IPaddr:192.168.83.229 Prefix:24 Hostname:newest-cni-634960 Clientid:01:52:54:00:fd:dd:55}
	I1210 06:58:11.824382   57657 main.go:143] libmachine: domain newest-cni-634960 has defined IP address 192.168.83.229 and MAC address 52:54:00:fd:dd:55 in network mk-newest-cni-634960
	I1210 06:58:11.824562   57657 sshutil.go:53] new ssh client: &{IP:192.168.83.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/newest-cni-634960/id_rsa Username:docker}
	I1210 06:58:12.105192   57657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:58:12.129236   57657 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:58:12.129320   57657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:58:12.158995   57657 api_server.go:72] duration metric: took 348.841447ms to wait for apiserver process to appear ...
	I1210 06:58:12.159024   57657 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:58:12.159047   57657 api_server.go:253] Checking apiserver healthz at https://192.168.83.229:8443/healthz ...
	I1210 06:58:12.167095   57657 api_server.go:279] https://192.168.83.229:8443/healthz returned 200:
	ok
	I1210 06:58:12.168088   57657 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 06:58:12.168114   57657 api_server.go:131] duration metric: took 9.081904ms to wait for apiserver health ...
	I1210 06:58:12.168124   57657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:58:12.172142   57657 system_pods.go:59] 8 kube-system pods found
	I1210 06:58:12.172178   57657 system_pods.go:61] "coredns-7d764666f9-dghgw" [32bbaa19-c1ea-4b21-85c7-05d4b8027bd3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:58:12.172189   57657 system_pods.go:61] "etcd-newest-cni-634960" [a86ad5b9-aeb4-4897-9294-dbb5eabab06f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:58:12.172201   57657 system_pods.go:61] "kube-apiserver-newest-cni-634960" [a8fad731-5215-406b-b6a5-3e1d1ad17b2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:58:12.172210   57657 system_pods.go:61] "kube-controller-manager-newest-cni-634960" [9ca22733-48c4-4eab-b562-95ba95f7fa69] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:58:12.172221   57657 system_pods.go:61] "kube-proxy-nrmkj" [b7802c02-2dad-4380-917c-7e47dbe85553] Running
	I1210 06:58:12.172230   57657 system_pods.go:61] "kube-scheduler-newest-cni-634960" [12eb0aac-e2b2-421f-90da-a798abfcd4f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:58:12.172243   57657 system_pods.go:61] "metrics-server-5d785b57d4-jb5w4" [e4b7cc4e-5e70-4e7e-b8b4-7343a57daebe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:58:12.172249   57657 system_pods.go:61] "storage-provisioner" [51a27277-2fc4-4a66-936e-e311dc6ad7ce] Running
	I1210 06:58:12.172257   57657 system_pods.go:74] duration metric: took 4.125187ms to wait for pod list to return data ...
	I1210 06:58:12.172270   57657 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:58:12.174972   57657 default_sa.go:45] found service account: "default"
	I1210 06:58:12.174991   57657 default_sa.go:55] duration metric: took 2.714429ms for default service account to be created ...
	I1210 06:58:12.175001   57657 kubeadm.go:587] duration metric: took 364.850975ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:58:12.175015   57657 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:58:12.178341   57657 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 06:58:12.178376   57657 node_conditions.go:123] node cpu capacity is 2
	I1210 06:58:12.178390   57657 node_conditions.go:105] duration metric: took 3.370216ms to run NodePressure ...
	I1210 06:58:12.178407   57657 start.go:242] waiting for startup goroutines ...
	I1210 06:58:12.272423   57657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:58:12.284663   57657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:58:12.291526   57657 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 06:58:12.291545   57657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 06:58:12.295902   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:58:12.295924   57657 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:58:12.351015   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:58:12.351053   57657 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:58:12.351015   57657 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 06:58:12.351116   57657 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 06:58:12.399432   57657 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 06:58:12.399458   57657 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 06:58:12.419590   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:58:12.419616   57657 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:58:12.454173   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:58:12.454203   57657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:58:12.480174   57657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 06:58:12.531112   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:58:12.531141   57657 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:58:12.577308   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:58:12.577336   57657 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:58:12.641079   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:58:12.641112   57657 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:58:12.697447   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:58:12.697475   57657 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:58:12.743534   57657 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:58:12.743563   57657 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:58:12.797603   57657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:58:13.967238   57657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.682534602s)
	I1210 06:58:14.066451   57657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.586228222s)
	I1210 06:58:14.066502   57657 addons.go:495] Verifying addon metrics-server=true in "newest-cni-634960"
	I1210 06:58:14.220467   57657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.422797744s)
	I1210 06:58:14.222018   57657 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-634960 addons enable metrics-server
	
	I1210 06:58:14.223369   57657 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1210 06:58:14.224300   57657 addons.go:530] duration metric: took 2.414110641s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1210 06:58:14.224348   57657 start.go:247] waiting for cluster config update ...
	I1210 06:58:14.224376   57657 start.go:256] writing updated cluster config ...
	I1210 06:58:14.224749   57657 ssh_runner.go:195] Run: rm -f paused
	I1210 06:58:14.292672   57657 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 06:58:14.294231   57657 out.go:179] * Done! kubectl is now configured to use "newest-cni-634960" cluster and "default" namespace by default
	I1210 06:58:11.911287   57899 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-289565" ...
	I1210 06:58:11.911349   57899 main.go:143] libmachine: starting domain...
	I1210 06:58:11.911380   57899 main.go:143] libmachine: ensuring networks are active...
	I1210 06:58:11.912385   57899 main.go:143] libmachine: Ensuring network default is active
	I1210 06:58:11.912833   57899 main.go:143] libmachine: Ensuring network mk-default-k8s-diff-port-289565 is active
	I1210 06:58:11.913621   57899 main.go:143] libmachine: getting domain XML...
	I1210 06:58:11.914989   57899 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-289565</name>
	  <uuid>d2e75319-fcde-40f1-9af9-525ae39e0e81</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/default-k8s-diff-port-289565.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:35:4c:2d'/>
	      <source network='mk-default-k8s-diff-port-289565'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:15:6d:71'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
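That XML is the libvirt domain definition minikube's kvm2 driver (re)starts for this profile. The driver talks to libvirt directly, but the equivalent manual steps can be done with virsh; the sketch below only illustrates the define/start/wait-for-IP sequence logged here, with a hypothetical path for the XML file.

```go
// Define and start a libvirt domain from an XML file by shelling out to
// virsh, then ask for its DHCP lease the way the "waiting for IP" step does.
// Hand-driven illustration only; minikube does not invoke virsh.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func virsh(args ...string) (string, error) {
	out, err := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	const domainXML = "/tmp/default-k8s-diff-port-289565.xml" // hypothetical copy of the XML above
	const domain = "default-k8s-diff-port-289565"

	if out, err := virsh("define", domainXML); err != nil {
		log.Fatalf("virsh define failed: %v\n%s", err, out)
	}
	if out, err := virsh("start", domain); err != nil {
		log.Fatalf("virsh start failed: %v\n%s", err, out)
	}
	// domifaddr reports the DHCP lease, i.e. the IP the driver waits for.
	out, _ := virsh("domifaddr", domain)
	fmt.Println(out)
}
```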
	
	I1210 06:58:13.294546   57899 main.go:143] libmachine: waiting for domain to start...
	I1210 06:58:13.296318   57899 main.go:143] libmachine: domain is now running
	I1210 06:58:13.296378   57899 main.go:143] libmachine: waiting for IP...
	I1210 06:58:13.297529   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:13.298404   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has current primary IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:13.298423   57899 main.go:143] libmachine: found domain IP: 192.168.39.74
	I1210 06:58:13.298431   57899 main.go:143] libmachine: reserving static IP address...
	I1210 06:58:13.298981   57899 main.go:143] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-289565", mac: "52:54:00:35:4c:2d", ip: "192.168.39.74"} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:55:30 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:13.299025   57899 main.go:143] libmachine: skip adding static IP to network mk-default-k8s-diff-port-289565 - found existing host DHCP lease matching {name: "default-k8s-diff-port-289565", mac: "52:54:00:35:4c:2d", ip: "192.168.39.74"}
	I1210 06:58:13.299039   57899 main.go:143] libmachine: reserved static IP address 192.168.39.74 for domain default-k8s-diff-port-289565
	I1210 06:58:13.299054   57899 main.go:143] libmachine: waiting for SSH...
	I1210 06:58:13.299066   57899 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 06:58:13.302259   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:13.302735   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:55:30 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:13.302773   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:13.302978   57899 main.go:143] libmachine: Using SSH client type: native
	I1210 06:58:13.303318   57899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I1210 06:58:13.303335   57899 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 06:58:16.382609   57899 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.74:22: connect: no route to host
	I1210 06:58:22.462664   57899 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.74:22: connect: no route to host
	I1210 06:58:25.574083   57899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
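Note the two `no route to host` dial errors before the empty `SSH cmd err, output: <nil>` result: the driver keeps retrying until the freshly booted VM accepts SSH. A minimal version of that wait loop, checking only TCP reachability rather than running `exit 0` over SSH as libmachine does, might look like this (address and rough timing taken from the log):

```go
// Wait for an SSH port to accept TCP connections, roughly the
// "waiting for SSH" step above. Sketch only, not libmachine's code.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh on %s not reachable after %s: %w", addr, timeout, err)
		}
		fmt.Println("dial failed, retrying:", err) // e.g. "connect: no route to host" as in the log
		time.Sleep(3 * time.Second)
	}
}

func main() {
	if err := waitForSSH("192.168.39.74:22", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH port is reachable")
}
```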
	I1210 06:58:25.578114   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.578556   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:25.578594   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.578866   57899 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565/config.json ...
	I1210 06:58:25.579045   57899 machine.go:94] provisionDockerMachine start ...
	I1210 06:58:25.581730   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.582237   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:25.582268   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.582508   57899 main.go:143] libmachine: Using SSH client type: native
	I1210 06:58:25.582754   57899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I1210 06:58:25.582768   57899 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:58:25.700204   57899 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 06:58:25.700233   57899 buildroot.go:166] provisioning hostname "default-k8s-diff-port-289565"
	I1210 06:58:25.703447   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.703935   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:25.703967   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.704262   57899 main.go:143] libmachine: Using SSH client type: native
	I1210 06:58:25.704620   57899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I1210 06:58:25.704637   57899 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-289565 && echo "default-k8s-diff-port-289565" | sudo tee /etc/hostname
	I1210 06:58:25.832639   57899 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-289565
	
	I1210 06:58:25.835571   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.836063   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:25.836100   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.836421   57899 main.go:143] libmachine: Using SSH client type: native
	I1210 06:58:25.836723   57899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I1210 06:58:25.836745   57899 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-289565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-289565/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-289565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:58:25.951834   57899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:58:25.951883   57899 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8667/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8667/.minikube}
	I1210 06:58:25.951923   57899 buildroot.go:174] setting up certificates
	I1210 06:58:25.951939   57899 provision.go:84] configureAuth start
	I1210 06:58:25.955000   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.955540   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:25.955577   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.957841   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.958197   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:25.958218   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.958345   57899 provision.go:143] copyHostCerts
	I1210 06:58:25.958416   57899 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem, removing ...
	I1210 06:58:25.958434   57899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem
	I1210 06:58:25.958525   57899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem (1123 bytes)
	I1210 06:58:25.958666   57899 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem, removing ...
	I1210 06:58:25.958679   57899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem
	I1210 06:58:25.958723   57899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem (1675 bytes)
	I1210 06:58:25.958830   57899 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem, removing ...
	I1210 06:58:25.958841   57899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem
	I1210 06:58:25.958876   57899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem (1082 bytes)
	I1210 06:58:25.958964   57899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-289565 san=[127.0.0.1 192.168.39.74 default-k8s-diff-port-289565 localhost minikube]
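provision.go issues a per-machine server certificate whose SAN list is shown in the line above (127.0.0.1, the VM IP, the profile name, localhost, minikube), signed by the shared minikube CA. A compact, self-contained sketch of issuing such a certificate with Go's crypto/x509 follows; it is illustrative only, not minikube's code, and the key size, lifetime, and inline throwaway CA are arbitrary choices for the sketch (error handling elided):

```go
// Issue a server certificate whose SANs match the list logged above.
// Minikube would reuse certs/ca.pem and ca-key.pem instead of the inline CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-289565"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-289565", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.74")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
```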
	I1210 06:58:25.975840   57899 provision.go:177] copyRemoteCerts
	I1210 06:58:25.975893   57899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:58:25.978462   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.978839   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:25.978861   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:25.979007   57899 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/id_rsa Username:docker}
	I1210 06:58:26.062023   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:58:26.090886   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:58:26.119223   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:58:26.148586   57899 provision.go:87] duration metric: took 196.619756ms to configureAuth
	I1210 06:58:26.148615   57899 buildroot.go:189] setting minikube options for container-runtime
	I1210 06:58:26.148860   57899 config.go:182] Loaded profile config "default-k8s-diff-port-289565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:58:26.151960   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.152384   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:26.152418   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.152629   57899 main.go:143] libmachine: Using SSH client type: native
	I1210 06:58:26.152983   57899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I1210 06:58:26.153014   57899 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:58:26.398167   57899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:58:26.398194   57899 machine.go:97] duration metric: took 819.136879ms to provisionDockerMachine
	I1210 06:58:26.398205   57899 start.go:293] postStartSetup for "default-k8s-diff-port-289565" (driver="kvm2")
	I1210 06:58:26.398215   57899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:58:26.398279   57899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:58:26.401504   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.401929   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:26.401954   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.402117   57899 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/id_rsa Username:docker}
	I1210 06:58:26.485223   57899 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:58:26.490042   57899 info.go:137] Remote host: Buildroot 2025.02
	I1210 06:58:26.490065   57899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/addons for local assets ...
	I1210 06:58:26.490131   57899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/files for local assets ...
	I1210 06:58:26.490212   57899 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem -> 125882.pem in /etc/ssl/certs
	I1210 06:58:26.490293   57899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:58:26.502326   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem --> /etc/ssl/certs/125882.pem (1708 bytes)
	I1210 06:58:26.531368   57899 start.go:296] duration metric: took 133.134511ms for postStartSetup
	I1210 06:58:26.531417   57899 fix.go:56] duration metric: took 14.623628737s for fixHost
	I1210 06:58:26.534168   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.534629   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:26.534656   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.534862   57899 main.go:143] libmachine: Using SSH client type: native
	I1210 06:58:26.535057   57899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I1210 06:58:26.535067   57899 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 06:58:26.641213   57899 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765349906.587866713
	
	I1210 06:58:26.641237   57899 fix.go:216] guest clock: 1765349906.587866713
	I1210 06:58:26.641244   57899 fix.go:229] Guest: 2025-12-10 06:58:26.587866713 +0000 UTC Remote: 2025-12-10 06:58:26.531422658 +0000 UTC m=+14.736328619 (delta=56.444055ms)
	I1210 06:58:26.641261   57899 fix.go:200] guest clock delta is within tolerance: 56.444055ms
	I1210 06:58:26.641282   57899 start.go:83] releasing machines lock for "default-k8s-diff-port-289565", held for 14.733492588s
	I1210 06:58:26.644406   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.644888   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:26.644926   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.645588   57899 ssh_runner.go:195] Run: cat /version.json
	I1210 06:58:26.645692   57899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:58:26.648816   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.648932   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.649277   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:26.649303   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.649376   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:26.649409   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:26.649460   57899 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/id_rsa Username:docker}
	I1210 06:58:26.649687   57899 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/id_rsa Username:docker}
	I1210 06:58:26.726137   57899 ssh_runner.go:195] Run: systemctl --version
	I1210 06:58:26.762623   57899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:58:26.910948   57899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:58:26.917692   57899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:58:26.917783   57899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:58:26.938007   57899 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:58:26.938032   57899 start.go:496] detecting cgroup driver to use...
	I1210 06:58:26.938095   57899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:58:26.957769   57899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:58:26.975206   57899 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:58:26.975262   57899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:58:26.992992   57899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:58:27.010153   57899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:58:27.168425   57899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:58:27.396029   57899 docker.go:234] disabling docker service ...
	I1210 06:58:27.396096   57899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:58:27.412311   57899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:58:27.427147   57899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:58:27.584824   57899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:58:27.728328   57899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:58:27.744273   57899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:58:27.766574   57899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:58:27.766662   57899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:58:27.778632   57899 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:58:27.778723   57899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:58:27.790984   57899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:58:27.803640   57899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:58:27.815841   57899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:58:27.829005   57899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:58:27.841272   57899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:58:27.861972   57899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:58:27.874154   57899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:58:27.884458   57899 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 06:58:27.884545   57899 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 06:58:27.904631   57899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:58:27.916584   57899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:58:28.063430   57899 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:58:28.186310   57899 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:58:28.186408   57899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:58:28.191863   57899 start.go:564] Will wait 60s for crictl version
	I1210 06:58:28.191949   57899 ssh_runner.go:195] Run: which crictl
	I1210 06:58:28.196096   57899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 06:58:28.233798   57899 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 06:58:28.233892   57899 ssh_runner.go:195] Run: crio --version
	I1210 06:58:28.262780   57899 ssh_runner.go:195] Run: crio --version
	I1210 06:58:28.294184   57899 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1210 06:58:28.298815   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:28.299261   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:28.299287   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:28.299509   57899 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 06:58:28.304214   57899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:58:28.319207   57899 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-289565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-289565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:58:28.319333   57899 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:58:28.319383   57899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:58:28.351715   57899 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1210 06:58:28.351784   57899 ssh_runner.go:195] Run: which lz4
	I1210 06:58:28.356475   57899 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 06:58:28.361532   57899 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 06:58:28.361569   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1210 06:58:29.590201   57899 crio.go:462] duration metric: took 1.233760364s to copy over tarball
	I1210 06:58:29.590297   57899 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 06:58:31.121463   57899 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.53112184s)
	I1210 06:58:31.121499   57899 crio.go:469] duration metric: took 1.531268729s to extract the tarball
	I1210 06:58:31.121507   57899 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 06:58:31.160546   57899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:58:31.203325   57899 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:58:31.203347   57899 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:58:31.203375   57899 kubeadm.go:935] updating node { 192.168.39.74 8444 v1.34.2 crio true true} ...
	I1210 06:58:31.203490   57899 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-289565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-289565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:58:31.203586   57899 ssh_runner.go:195] Run: crio config
	I1210 06:58:31.250130   57899 cni.go:84] Creating CNI manager for ""
	I1210 06:58:31.250154   57899 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:58:31.250170   57899 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:58:31.250191   57899 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.74 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-289565 NodeName:default-k8s-diff-port-289565 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:58:31.250330   57899 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.74
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-289565"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.74"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.74"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:58:31.250410   57899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 06:58:31.262778   57899 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:58:31.262843   57899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:58:31.274641   57899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1210 06:58:31.295281   57899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:58:31.316419   57899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1210 06:58:31.338548   57899 ssh_runner.go:195] Run: grep 192.168.39.74	control-plane.minikube.internal$ /etc/hosts
	I1210 06:58:31.343261   57899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:58:31.358657   57899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:58:31.499402   57899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:58:31.541303   57899 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565 for IP: 192.168.39.74
	I1210 06:58:31.541331   57899 certs.go:195] generating shared ca certs ...
	I1210 06:58:31.541350   57899 certs.go:227] acquiring lock for ca certs: {Name:mkbf1082c8328cc7c1360f5f8b344958e8aa5792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:58:31.541602   57899 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key
	I1210 06:58:31.541662   57899 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key
	I1210 06:58:31.541676   57899 certs.go:257] generating profile certs ...
	I1210 06:58:31.541817   57899 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565/client.key
	I1210 06:58:31.541912   57899 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565/apiserver.key.040095f6
	I1210 06:58:31.541975   57899 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565/proxy-client.key
	I1210 06:58:31.542130   57899 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588.pem (1338 bytes)
	W1210 06:58:31.542177   57899 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588_empty.pem, impossibly tiny 0 bytes
	I1210 06:58:31.542193   57899 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:58:31.542233   57899 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:58:31.542277   57899 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:58:31.542308   57899 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem (1675 bytes)
	I1210 06:58:31.542400   57899 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem (1708 bytes)
	I1210 06:58:31.543274   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:58:31.574515   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:58:31.610270   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:58:31.639442   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:58:31.669576   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:58:31.699504   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:58:31.730269   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:58:31.761049   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/default-k8s-diff-port-289565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:58:31.791569   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588.pem --> /usr/share/ca-certificates/12588.pem (1338 bytes)
	I1210 06:58:31.822152   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem --> /usr/share/ca-certificates/125882.pem (1708 bytes)
	I1210 06:58:31.852340   57899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:58:31.882209   57899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:58:31.902933   57899 ssh_runner.go:195] Run: openssl version
	I1210 06:58:31.909582   57899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12588.pem
	I1210 06:58:31.921091   57899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12588.pem /etc/ssl/certs/12588.pem
	I1210 06:58:31.933114   57899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12588.pem
	I1210 06:58:31.938318   57899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:56 /usr/share/ca-certificates/12588.pem
	I1210 06:58:31.938396   57899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12588.pem
	I1210 06:58:31.945688   57899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:58:31.957005   57899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12588.pem /etc/ssl/certs/51391683.0
	I1210 06:58:31.968350   57899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/125882.pem
	I1210 06:58:31.979957   57899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/125882.pem /etc/ssl/certs/125882.pem
	I1210 06:58:31.991540   57899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125882.pem
	I1210 06:58:31.997457   57899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:56 /usr/share/ca-certificates/125882.pem
	I1210 06:58:31.997535   57899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125882.pem
	I1210 06:58:32.004831   57899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:58:32.016711   57899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/125882.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:58:32.028744   57899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:58:32.040242   57899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:58:32.051955   57899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:58:32.057491   57899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:58:32.057566   57899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:58:32.064803   57899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:58:32.076328   57899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:58:32.088102   57899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:58:32.093702   57899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:58:32.102382   57899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:58:32.110783   57899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:58:32.118873   57899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:58:32.127068   57899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:58:32.135144   57899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:58:32.143000   57899 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-289565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-289565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:58:32.143088   57899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:58:32.143155   57899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:58:32.181268   57899 cri.go:89] found id: ""
	I1210 06:58:32.181349   57899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:58:32.193585   57899 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:58:32.193613   57899 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:58:32.193668   57899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:58:32.205471   57899 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:58:32.206207   57899 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-289565" does not appear in /home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:58:32.206572   57899 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-8667/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-289565" cluster setting kubeconfig missing "default-k8s-diff-port-289565" context setting]
	I1210 06:58:32.207232   57899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/kubeconfig: {Name:mke7eeebab9139e29de7a6356b74da28e2a42365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:58:32.208943   57899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:58:32.222923   57899 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.74
	I1210 06:58:32.222955   57899 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:58:32.222971   57899 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 06:58:32.223035   57899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:58:32.263009   57899 cri.go:89] found id: ""
	I1210 06:58:32.263095   57899 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:58:32.290337   57899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:58:32.303177   57899 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:58:32.303202   57899 kubeadm.go:158] found existing configuration files:
	
	I1210 06:58:32.303259   57899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 06:58:32.314418   57899 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:58:32.314506   57899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:58:32.326607   57899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 06:58:32.337531   57899 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:58:32.337603   57899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:58:32.349419   57899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 06:58:32.360994   57899 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:58:32.361059   57899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:58:32.373393   57899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 06:58:32.384726   57899 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:58:32.384786   57899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:58:32.396451   57899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:58:32.408632   57899 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:58:32.464000   57899 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:58:33.981934   57899 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.51789857s)
	I1210 06:58:33.982015   57899 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:58:34.231620   57899 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:58:34.303808   57899 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:58:34.381224   57899 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:58:34.381320   57899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:58:34.882379   57899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:58:35.381402   57899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:58:35.882299   57899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:58:35.924917   57899 api_server.go:72] duration metric: took 1.543703117s to wait for apiserver process to appear ...
	I1210 06:58:35.924954   57899 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:58:35.924976   57899 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8444/healthz ...
	I1210 06:58:35.925454   57899 api_server.go:269] stopped: https://192.168.39.74:8444/healthz: Get "https://192.168.39.74:8444/healthz": dial tcp 192.168.39.74:8444: connect: connection refused
	I1210 06:58:36.425181   57899 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8444/healthz ...
	I1210 06:58:38.947218   57899 api_server.go:279] https://192.168.39.74:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:58:38.947248   57899 api_server.go:103] status: https://192.168.39.74:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:58:38.947264   57899 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8444/healthz ...
	I1210 06:58:38.995286   57899 api_server.go:279] https://192.168.39.74:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:58:38.995314   57899 api_server.go:103] status: https://192.168.39.74:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:58:39.425931   57899 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8444/healthz ...
	I1210 06:58:39.442475   57899 api_server.go:279] https://192.168.39.74:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:58:39.442512   57899 api_server.go:103] status: https://192.168.39.74:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:58:39.925150   57899 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8444/healthz ...
	I1210 06:58:39.935199   57899 api_server.go:279] https://192.168.39.74:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:58:39.935237   57899 api_server.go:103] status: https://192.168.39.74:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:58:40.425966   57899 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8444/healthz ...
	I1210 06:58:40.440096   57899 api_server.go:279] https://192.168.39.74:8444/healthz returned 200:
	ok
	I1210 06:58:40.448476   57899 api_server.go:141] control plane version: v1.34.2
	I1210 06:58:40.448513   57899 api_server.go:131] duration metric: took 4.523545639s to wait for apiserver health ...
	I1210 06:58:40.448540   57899 cni.go:84] Creating CNI manager for ""
	I1210 06:58:40.448548   57899 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:58:40.450393   57899 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 06:58:40.451650   57899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 06:58:40.471711   57899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 06:58:40.503706   57899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:58:40.508191   57899 system_pods.go:59] 8 kube-system pods found
	I1210 06:58:40.508232   57899 system_pods.go:61] "coredns-66bc5c9577-7crp5" [70596b09-12b4-4edf-9f21-621113cd8744] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:58:40.508245   57899 system_pods.go:61] "etcd-default-k8s-diff-port-289565" [8e35798f-9095-4a0e-a8a4-df2e772211d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:58:40.508257   57899 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-289565" [b0afba81-d996-40a9-b719-2de27983547d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:58:40.508269   57899 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-289565" [287dbd9d-5f91-412c-85dc-a2035cb9d3c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:58:40.508277   57899 system_pods.go:61] "kube-proxy-l98nf" [273c44e9-c94d-4a12-acfa-174f9661d090] Running
	I1210 06:58:40.508286   57899 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-289565" [73921a49-06ca-43b4-a607-18827e2390ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:58:40.508298   57899 system_pods.go:61] "metrics-server-746fcd58dc-2kwc4" [76a91af0-4c31-4244-830a-aeef0841c643] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:58:40.508306   57899 system_pods.go:61] "storage-provisioner" [fea5dcf8-6e32-4240-9e3e-783d6c4bb16a] Running
	I1210 06:58:40.508315   57899 system_pods.go:74] duration metric: took 4.582928ms to wait for pod list to return data ...
	I1210 06:58:40.508333   57899 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:58:40.515087   57899 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 06:58:40.515144   57899 node_conditions.go:123] node cpu capacity is 2
	I1210 06:58:40.515165   57899 node_conditions.go:105] duration metric: took 6.827144ms to run NodePressure ...
	I1210 06:58:40.515239   57899 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:58:40.792066   57899 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1210 06:58:40.801953   57899 kubeadm.go:744] kubelet initialised
	I1210 06:58:40.801976   57899 kubeadm.go:745] duration metric: took 9.884028ms waiting for restarted kubelet to initialise ...
	I1210 06:58:40.801994   57899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:58:40.817484   57899 ops.go:34] apiserver oom_adj: -16
	I1210 06:58:40.817508   57899 kubeadm.go:602] duration metric: took 8.623888198s to restartPrimaryControlPlane
	I1210 06:58:40.817520   57899 kubeadm.go:403] duration metric: took 8.67452929s to StartCluster
	I1210 06:58:40.817542   57899 settings.go:142] acquiring lock: {Name:mk3d395dc9d24e60f90f67efa719ff71be48daf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:58:40.817625   57899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:58:40.818809   57899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/kubeconfig: {Name:mke7eeebab9139e29de7a6356b74da28e2a42365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:58:40.819091   57899 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.74 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:58:40.819213   57899 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:58:40.819306   57899 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-289565"
	I1210 06:58:40.819329   57899 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-289565"
	I1210 06:58:40.819327   57899 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-289565"
	W1210 06:58:40.819337   57899 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:58:40.819346   57899 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-289565"
	I1210 06:58:40.819367   57899 config.go:182] Loaded profile config "default-k8s-diff-port-289565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:58:40.819377   57899 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-289565"
	I1210 06:58:40.819377   57899 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-289565"
	I1210 06:58:40.819376   57899 host.go:66] Checking if "default-k8s-diff-port-289565" exists ...
	I1210 06:58:40.819413   57899 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-289565"
	I1210 06:58:40.819413   57899 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-289565"
	W1210 06:58:40.819426   57899 addons.go:248] addon dashboard should already be in state true
	W1210 06:58:40.819426   57899 addons.go:248] addon metrics-server should already be in state true
	I1210 06:58:40.819457   57899 host.go:66] Checking if "default-k8s-diff-port-289565" exists ...
	I1210 06:58:40.819457   57899 host.go:66] Checking if "default-k8s-diff-port-289565" exists ...
	I1210 06:58:40.820896   57899 out.go:179] * Verifying Kubernetes components...
	I1210 06:58:40.822308   57899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:58:40.822347   57899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:58:40.822389   57899 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 06:58:40.822431   57899 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:58:40.822711   57899 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-289565"
	W1210 06:58:40.822816   57899 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:58:40.822842   57899 host.go:66] Checking if "default-k8s-diff-port-289565" exists ...
	I1210 06:58:40.823666   57899 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:58:40.823680   57899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:58:40.824311   57899 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:58:40.824328   57899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:58:40.824510   57899 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 06:58:40.824528   57899 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 06:58:40.825858   57899 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:58:40.827162   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:58:40.827178   57899 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:58:40.827696   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:40.828110   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:40.828126   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:40.828207   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:40.828238   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:40.828440   57899 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/id_rsa Username:docker}
	I1210 06:58:40.828668   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:40.828690   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:40.828700   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:40.828717   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:40.829062   57899 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/id_rsa Username:docker}
	I1210 06:58:40.829211   57899 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/id_rsa Username:docker}
	I1210 06:58:40.830161   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:40.830577   57899 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:4c:2d", ip: ""} in network mk-default-k8s-diff-port-289565: {Iface:virbr1 ExpiryTime:2025-12-10 07:58:23 +0000 UTC Type:0 Mac:52:54:00:35:4c:2d Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:default-k8s-diff-port-289565 Clientid:01:52:54:00:35:4c:2d}
	I1210 06:58:40.830602   57899 main.go:143] libmachine: domain default-k8s-diff-port-289565 has defined IP address 192.168.39.74 and MAC address 52:54:00:35:4c:2d in network mk-default-k8s-diff-port-289565
	I1210 06:58:40.830770   57899 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/default-k8s-diff-port-289565/id_rsa Username:docker}
	I1210 06:58:41.040897   57899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:58:41.070144   57899 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-289565" to be "Ready" ...
	I1210 06:58:41.076507   57899 node_ready.go:49] node "default-k8s-diff-port-289565" is "Ready"
	I1210 06:58:41.076540   57899 node_ready.go:38] duration metric: took 6.33455ms for node "default-k8s-diff-port-289565" to be "Ready" ...
	I1210 06:58:41.076553   57899 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:58:41.076601   57899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:58:41.111175   57899 api_server.go:72] duration metric: took 292.048171ms to wait for apiserver process to appear ...
	I1210 06:58:41.111204   57899 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:58:41.111226   57899 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8444/healthz ...
	I1210 06:58:41.120794   57899 api_server.go:279] https://192.168.39.74:8444/healthz returned 200:
	ok
	I1210 06:58:41.121735   57899 api_server.go:141] control plane version: v1.34.2
	I1210 06:58:41.121767   57899 api_server.go:131] duration metric: took 10.553988ms to wait for apiserver health ...
	I1210 06:58:41.121780   57899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:58:41.127384   57899 system_pods.go:59] 8 kube-system pods found
	I1210 06:58:41.127427   57899 system_pods.go:61] "coredns-66bc5c9577-7crp5" [70596b09-12b4-4edf-9f21-621113cd8744] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:58:41.127438   57899 system_pods.go:61] "etcd-default-k8s-diff-port-289565" [8e35798f-9095-4a0e-a8a4-df2e772211d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:58:41.127449   57899 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-289565" [b0afba81-d996-40a9-b719-2de27983547d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:58:41.127466   57899 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-289565" [287dbd9d-5f91-412c-85dc-a2035cb9d3c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:58:41.127482   57899 system_pods.go:61] "kube-proxy-l98nf" [273c44e9-c94d-4a12-acfa-174f9661d090] Running
	I1210 06:58:41.127493   57899 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-289565" [73921a49-06ca-43b4-a607-18827e2390ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:58:41.127501   57899 system_pods.go:61] "metrics-server-746fcd58dc-2kwc4" [76a91af0-4c31-4244-830a-aeef0841c643] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:58:41.127511   57899 system_pods.go:61] "storage-provisioner" [fea5dcf8-6e32-4240-9e3e-783d6c4bb16a] Running
	I1210 06:58:41.127522   57899 system_pods.go:74] duration metric: took 5.733911ms to wait for pod list to return data ...
	I1210 06:58:41.127532   57899 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:58:41.144825   57899 default_sa.go:45] found service account: "default"
	I1210 06:58:41.144851   57899 default_sa.go:55] duration metric: took 17.310042ms for default service account to be created ...
	I1210 06:58:41.144861   57899 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:58:41.152828   57899 system_pods.go:86] 8 kube-system pods found
	I1210 06:58:41.152858   57899 system_pods.go:89] "coredns-66bc5c9577-7crp5" [70596b09-12b4-4edf-9f21-621113cd8744] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:58:41.152868   57899 system_pods.go:89] "etcd-default-k8s-diff-port-289565" [8e35798f-9095-4a0e-a8a4-df2e772211d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:58:41.152874   57899 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-289565" [b0afba81-d996-40a9-b719-2de27983547d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:58:41.152882   57899 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-289565" [287dbd9d-5f91-412c-85dc-a2035cb9d3c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:58:41.152887   57899 system_pods.go:89] "kube-proxy-l98nf" [273c44e9-c94d-4a12-acfa-174f9661d090] Running
	I1210 06:58:41.152892   57899 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-289565" [73921a49-06ca-43b4-a607-18827e2390ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:58:41.152897   57899 system_pods.go:89] "metrics-server-746fcd58dc-2kwc4" [76a91af0-4c31-4244-830a-aeef0841c643] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:58:41.152901   57899 system_pods.go:89] "storage-provisioner" [fea5dcf8-6e32-4240-9e3e-783d6c4bb16a] Running
	I1210 06:58:41.152908   57899 system_pods.go:126] duration metric: took 8.042195ms to wait for k8s-apps to be running ...
	I1210 06:58:41.152915   57899 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:58:41.152958   57899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:58:41.159469   57899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:58:41.217344   57899 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 06:58:41.217389   57899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 06:58:41.224684   57899 system_svc.go:56] duration metric: took 71.759383ms WaitForService to wait for kubelet
	I1210 06:58:41.224717   57899 kubeadm.go:587] duration metric: took 405.593259ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:58:41.224740   57899 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:58:41.231044   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:58:41.231075   57899 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:58:41.231822   57899 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 06:58:41.231855   57899 node_conditions.go:123] node cpu capacity is 2
	I1210 06:58:41.231872   57899 node_conditions.go:105] duration metric: took 7.125042ms to run NodePressure ...
	I1210 06:58:41.231886   57899 start.go:242] waiting for startup goroutines ...
	I1210 06:58:41.233819   57899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:58:41.293059   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:58:41.293090   57899 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:58:41.334146   57899 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 06:58:41.334173   57899 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 06:58:41.376799   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:58:41.376826   57899 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:58:41.481854   57899 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 06:58:41.481884   57899 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 06:58:41.551637   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:58:41.551664   57899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:58:41.602174   57899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 06:58:41.637525   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:58:41.637558   57899 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:58:41.760564   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:58:41.760591   57899 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:58:41.863063   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:58:41.863089   57899 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:58:41.937847   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:58:41.937871   57899 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:58:41.997921   57899 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:58:41.997955   57899 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:58:42.056923   57899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:58:42.901701   57899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.667840162s)
	I1210 06:58:42.901967   57899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.742459914s)
	I1210 06:58:43.045064   57899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.442854711s)
	I1210 06:58:43.045104   57899 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-289565"
	I1210 06:58:43.538881   57899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.481891763s)
	I1210 06:58:43.540390   57899 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-289565 addons enable metrics-server
	
	I1210 06:58:43.541959   57899 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1210 06:58:43.543659   57899 addons.go:530] duration metric: took 2.724451449s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1210 06:58:43.543699   57899 start.go:247] waiting for cluster config update ...
	I1210 06:58:43.543715   57899 start.go:256] writing updated cluster config ...
	I1210 06:58:43.543958   57899 ssh_runner.go:195] Run: rm -f paused
	I1210 06:58:43.555193   57899 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:58:43.566104   57899 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7crp5" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:58:45.572756   57899 pod_ready.go:104] pod "coredns-66bc5c9577-7crp5" is not "Ready", error: <nil>
	W1210 06:58:47.573287   57899 pod_ready.go:104] pod "coredns-66bc5c9577-7crp5" is not "Ready", error: <nil>
	W1210 06:58:50.072970   57899 pod_ready.go:104] pod "coredns-66bc5c9577-7crp5" is not "Ready", error: <nil>
	I1210 06:58:50.576585   57899 pod_ready.go:94] pod "coredns-66bc5c9577-7crp5" is "Ready"
	I1210 06:58:50.576615   57899 pod_ready.go:86] duration metric: took 7.010487198s for pod "coredns-66bc5c9577-7crp5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:50.580089   57899 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-289565" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:58:52.585852   57899 pod_ready.go:104] pod "etcd-default-k8s-diff-port-289565" is not "Ready", error: <nil>
	I1210 06:58:53.087115   57899 pod_ready.go:94] pod "etcd-default-k8s-diff-port-289565" is "Ready"
	I1210 06:58:53.087151   57899 pod_ready.go:86] duration metric: took 2.507032443s for pod "etcd-default-k8s-diff-port-289565" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:53.090213   57899 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-289565" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:53.097340   57899 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-289565" is "Ready"
	I1210 06:58:53.097373   57899 pod_ready.go:86] duration metric: took 7.134303ms for pod "kube-apiserver-default-k8s-diff-port-289565" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:53.100472   57899 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-289565" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:53.611701   57899 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-289565" is "Ready"
	I1210 06:58:53.611729   57899 pod_ready.go:86] duration metric: took 511.229713ms for pod "kube-controller-manager-default-k8s-diff-port-289565" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:53.615666   57899 pod_ready.go:83] waiting for pod "kube-proxy-l98nf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:53.987163   57899 pod_ready.go:94] pod "kube-proxy-l98nf" is "Ready"
	I1210 06:58:53.987189   57899 pod_ready.go:86] duration metric: took 371.49794ms for pod "kube-proxy-l98nf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:53.996048   57899 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-289565" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:54.372314   57899 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-289565" is "Ready"
	I1210 06:58:54.372374   57899 pod_ready.go:86] duration metric: took 376.274763ms for pod "kube-scheduler-default-k8s-diff-port-289565" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:58:54.372396   57899 pod_ready.go:40] duration metric: took 10.817171185s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:58:54.420954   57899 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:58:54.423027   57899 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-289565" cluster and "default" namespace by default
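The run above ends with kubectl pointed at the "default-k8s-diff-port-289565" context. Outside of the captured run, the pod and addon state this log reports could be spot-checked with ordinary kubectl commands against that context; this is only a rough sketch, with the profile name and the kube-system namespace taken from the log above, while the kubernetes-dashboard namespace is an assumption about where the dashboard addon usually lands rather than something shown here:

	kubectl --context default-k8s-diff-port-289565 get nodes
	kubectl --context default-k8s-diff-port-289565 -n kube-system get pods
	# namespace below is assumed (typical target of the dashboard addon), not taken from this log
	kubectl --context default-k8s-diff-port-289565 -n kubernetes-dashboard get pods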
	I1210 06:59:18.800501   45161 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	I1210 06:59:18.800541   45161 kubeadm.go:319] 
	I1210 06:59:18.800662   45161 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1210 06:59:18.800803   45161 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 06:59:18.800935   45161 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1210 06:59:18.801088   45161 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 06:59:18.801174   45161 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1210 06:59:18.801278   45161 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1210 06:59:18.801310   45161 kubeadm.go:319] 
	I1210 06:59:18.801460   45161 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:59:18.801786   45161 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1210 06:59:18.801904   45161 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:59:18.801926   45161 kubeadm.go:403] duration metric: took 12m15.332011326s to StartCluster
	I1210 06:59:18.801973   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:59:18.802028   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:59:18.839789   45161 cri.go:89] found id: "003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07"
	I1210 06:59:18.839833   45161 cri.go:89] found id: ""
	I1210 06:59:18.839841   45161 logs.go:282] 1 containers: [003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07]
	I1210 06:59:18.839897   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:59:18.844251   45161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:59:18.844326   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:59:18.876803   45161 cri.go:89] found id: ""
	I1210 06:59:18.876829   45161 logs.go:282] 0 containers: []
	W1210 06:59:18.876836   45161 logs.go:284] No container was found matching "etcd"
	I1210 06:59:18.876845   45161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:59:18.876907   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:59:18.908922   45161 cri.go:89] found id: ""
	I1210 06:59:18.908949   45161 logs.go:282] 0 containers: []
	W1210 06:59:18.908960   45161 logs.go:284] No container was found matching "coredns"
	I1210 06:59:18.908967   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:59:18.909032   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:59:18.944065   45161 cri.go:89] found id: "04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4"
	I1210 06:59:18.944097   45161 cri.go:89] found id: ""
	I1210 06:59:18.944105   45161 logs.go:282] 1 containers: [04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4]
	I1210 06:59:18.944158   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:59:18.948742   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:59:18.948815   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:59:18.980123   45161 cri.go:89] found id: ""
	I1210 06:59:18.980151   45161 logs.go:282] 0 containers: []
	W1210 06:59:18.980159   45161 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:59:18.980165   45161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:59:18.980225   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:59:19.014536   45161 cri.go:89] found id: "f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389"
	I1210 06:59:19.014561   45161 cri.go:89] found id: ""
	I1210 06:59:19.014569   45161 logs.go:282] 1 containers: [f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389]
	I1210 06:59:19.014637   45161 ssh_runner.go:195] Run: which crictl
	I1210 06:59:19.019568   45161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:59:19.019642   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:59:19.053148   45161 cri.go:89] found id: ""
	I1210 06:59:19.053185   45161 logs.go:282] 0 containers: []
	W1210 06:59:19.053197   45161 logs.go:284] No container was found matching "kindnet"
	I1210 06:59:19.053206   45161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:59:19.053280   45161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:59:19.084959   45161 cri.go:89] found id: ""
	I1210 06:59:19.084989   45161 logs.go:282] 0 containers: []
	W1210 06:59:19.085001   45161 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:59:19.085013   45161 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:59:19.085031   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:59:19.158878   45161 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:59:19.158908   45161 logs.go:123] Gathering logs for kube-apiserver [003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07] ...
	I1210 06:59:19.158924   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07"
	I1210 06:59:19.195495   45161 logs.go:123] Gathering logs for kube-scheduler [04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4] ...
	I1210 06:59:19.195531   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4"
	I1210 06:59:19.227791   45161 logs.go:123] Gathering logs for kube-controller-manager [f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389] ...
	I1210 06:59:19.227822   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389"
	I1210 06:59:19.260502   45161 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:59:19.260533   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:59:19.512535   45161 logs.go:123] Gathering logs for container status ...
	I1210 06:59:19.512571   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:59:19.551411   45161 logs.go:123] Gathering logs for kubelet ...
	I1210 06:59:19.551442   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:59:19.649252   45161 logs.go:123] Gathering logs for dmesg ...
	I1210 06:59:19.649290   45161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 06:59:19.665711   45161 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.437538ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.006030468s
	[control-plane-check] kube-scheduler is healthy after 21.985150772s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:59:19.665784   45161 out.go:285] * 
	W1210 06:59:19.665854   45161 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.437538ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.006030468s
	[control-plane-check] kube-scheduler is healthy after 21.985150772s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:59:19.665868   45161 out.go:285] * 
	W1210 06:59:19.667900   45161 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:59:19.671401   45161 out.go:203] 
	W1210 06:59:19.672652   45161 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.437538ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.121:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.006030468s
	[control-plane-check] kube-scheduler is healthy after 21.985150772s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001172988s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.50.121:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:59:19.672687   45161 out.go:285] * 
	I1210 06:59:19.674425   45161 out.go:203] 
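The failed kubernetes-upgrade start above ends with kubeadm's own troubleshooting advice, and the cri.go/logs.go lines earlier in this log already located the exited kube-apiserver container (003ce810d656a1...). A minimal sketch of following that advice by hand, assuming shell access to the node (for example via "minikube -p kubernetes-upgrade-921183 ssh"; the profile name comes from the hostnames in the CRI-O section below, and the container ID will differ between runs):

	# list control-plane containers, as suggested by the kubeadm output above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the exited kube-apiserver container found by the log-gathering step above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs --tail 400 003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07
	# same CRI-O journal window the post-mortem collected
	sudo journalctl -u crio -n 400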
	
	
	==> CRI-O <==
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.581951690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765349960581927849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124925,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e18dd8f-1e6d-48b6-abac-0f6341c8e107 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.582657991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c70d6557-2e54-4b44-8761-b645519d47ba name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.582706646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c70d6557-2e54-4b44-8761-b645519d47ba name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.582796797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07,PodSandboxId:8001bd463e4630fd0f39807ea3252e16af9f9607880d5df25586ee540e97eb04,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1765349885551053559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5676ebdfcb3390f5d5962ea2906e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389,PodSandboxId:42bf8444d49f1c471c344f57a0d3d651eab910658c774f46b3f10c19a5b1d3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765349882552040894,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e712a5fdc89fc99f70c61173f5b6644,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4,PodSandboxId:975373f7b7640292668aea533870fc1d092a49865f0014d8eb2a8a6445d1fc66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765349719204758298,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf81a03cc94837314d4e0f67906143e,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c70d6557-2e54-4b44-8761-b645519d47ba name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.613132945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=effafa13-5a8f-4d41-b87e-b7a9ee3613d6 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.613586863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=effafa13-5a8f-4d41-b87e-b7a9ee3613d6 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.615053441Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4bb4bd46-2572-4f1a-b0be-ccbfc3bfefb6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.616041037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765349960616012307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124925,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bb4bd46-2572-4f1a-b0be-ccbfc3bfefb6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.616818691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d96a2584-4e09-487a-a713-8d9d319d0f36 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.616879145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d96a2584-4e09-487a-a713-8d9d319d0f36 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.616974508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07,PodSandboxId:8001bd463e4630fd0f39807ea3252e16af9f9607880d5df25586ee540e97eb04,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1765349885551053559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5676ebdfcb3390f5d5962ea2906e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":
\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389,PodSandboxId:42bf8444d49f1c471c344f57a0d3d651eab910658c774f46b3f10c19a5b1d3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765349882552040894,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
712a5fdc89fc99f70c61173f5b6644,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4,PodSandboxId:975373f7b7640292668aea533870fc1d092a49865f0014d8eb2a8a6445d1fc66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765349719204758298,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes
.pod.name: kube-scheduler-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf81a03cc94837314d4e0f67906143e,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d96a2584-4e09-487a-a713-8d9d319d0f36 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.650073008Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61cfa129-672e-484a-8cc0-15daedc936f4 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.650160691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61cfa129-672e-484a-8cc0-15daedc936f4 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.651613718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a29d52ac-5272-447c-a2d2-abe8b926efd1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.651966015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765349960651946167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124925,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a29d52ac-5272-447c-a2d2-abe8b926efd1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.652865256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10442020-938f-414b-ae66-b8e22fa2d688 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.652938981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10442020-938f-414b-ae66-b8e22fa2d688 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.653033759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07,PodSandboxId:8001bd463e4630fd0f39807ea3252e16af9f9607880d5df25586ee540e97eb04,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1765349885551053559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5676ebdfcb3390f5d5962ea2906e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":
\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389,PodSandboxId:42bf8444d49f1c471c344f57a0d3d651eab910658c774f46b3f10c19a5b1d3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765349882552040894,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
712a5fdc89fc99f70c61173f5b6644,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4,PodSandboxId:975373f7b7640292668aea533870fc1d092a49865f0014d8eb2a8a6445d1fc66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765349719204758298,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes
.pod.name: kube-scheduler-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf81a03cc94837314d4e0f67906143e,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10442020-938f-414b-ae66-b8e22fa2d688 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.682100502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18170045-19be-4959-8744-e5b6b42c7e39 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.682191736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18170045-19be-4959-8744-e5b6b42c7e39 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.683237973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8e7114a-5211-450f-845f-f79ec9ab9b7a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.683794291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765349960683768861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124925,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8e7114a-5211-450f-845f-f79ec9ab9b7a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.684824402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24bddc02-9145-4d46-bfcd-d50a2c00f180 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.684956998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24bddc02-9145-4d46-bfcd-d50a2c00f180 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:59:20 kubernetes-upgrade-921183 crio[2375]: time="2025-12-10 06:59:20.685170236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07,PodSandboxId:8001bd463e4630fd0f39807ea3252e16af9f9607880d5df25586ee540e97eb04,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1765349885551053559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5676ebdfcb3390f5d5962ea2906e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":
\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389,PodSandboxId:42bf8444d49f1c471c344f57a0d3d651eab910658c774f46b3f10c19a5b1d3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765349882552040894,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
712a5fdc89fc99f70c61173f5b6644,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4,PodSandboxId:975373f7b7640292668aea533870fc1d092a49865f0014d8eb2a8a6445d1fc66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765349719204758298,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes
.pod.name: kube-scheduler-kubernetes-upgrade-921183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf81a03cc94837314d4e0f67906143e,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24bddc02-9145-4d46-bfcd-d50a2c00f180 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                 NAMESPACE
	003ce810d656a       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   About a minute ago   Exited              kube-apiserver            15                  8001bd463e463       kube-apiserver-kubernetes-upgrade-921183            kube-system
	f5acbd8ab2cbf       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   About a minute ago   Exited              kube-controller-manager   15                  42bf8444d49f1       kube-controller-manager-kubernetes-upgrade-921183   kube-system
	04707ed350f85       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   4 minutes ago        Running             kube-scheduler            4                   975373f7b7640       kube-scheduler-kubernetes-upgrade-921183            kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100761] kauditd_printk_skb: 85 callbacks suppressed
	[  +2.457826] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.035786] kauditd_printk_skb: 223 callbacks suppressed
	[Dec10 06:47] kauditd_printk_skb: 92 callbacks suppressed
	[ +21.626466] kauditd_printk_skb: 65 callbacks suppressed
	[ +21.164624] kauditd_printk_skb: 20 callbacks suppressed
	[Dec10 06:48] kauditd_printk_skb: 20 callbacks suppressed
	[Dec10 06:49] kauditd_printk_skb: 5 callbacks suppressed
	[ +20.939501] kauditd_printk_skb: 5 callbacks suppressed
	[Dec10 06:51] kauditd_printk_skb: 128 callbacks suppressed
	[ +21.143936] kauditd_printk_skb: 5 callbacks suppressed
	[Dec10 06:52] kauditd_printk_skb: 6 callbacks suppressed
	[ +21.534661] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.005699] kauditd_printk_skb: 5 callbacks suppressed
	[Dec10 06:53] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.732228] kauditd_printk_skb: 5 callbacks suppressed
	[Dec10 06:54] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.960105] kauditd_printk_skb: 5 callbacks suppressed
	[Dec10 06:55] kauditd_printk_skb: 124 callbacks suppressed
	[Dec10 06:56] kauditd_printk_skb: 20 callbacks suppressed
	[ +21.430672] kauditd_printk_skb: 20 callbacks suppressed
	[Dec10 06:57] kauditd_printk_skb: 20 callbacks suppressed
	[Dec10 06:58] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> kernel <==
	 06:59:20 up 14 min,  0 users,  load average: 0.10, 0.19, 0.17
	Linux kubernetes-upgrade-921183 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [003ce810d656a15c8857ffd7d89385c83844083609b07530580574b00d174f07] <==
	W1210 06:58:06.203956       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:06.204013       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1210 06:58:06.204427       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1210 06:58:06.209219       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:58:06.212169       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1210 06:58:06.212188       1 plugins.go:160] Loaded 14 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,NodeDeclaredFeatureValidator,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1210 06:58:06.212441       1 instance.go:240] Using reconciler: lease
	W1210 06:58:06.213485       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:06.213486       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:07.204450       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:07.204463       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:07.214233       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:08.500989       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:08.534223       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:09.085921       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:11.296168       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:11.484707       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:11.581202       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:15.089241       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:15.305469       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:15.782750       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:22.337634       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:22.893192       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:58:22.978265       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1210 06:58:26.213618       1 instance.go:233] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [f5acbd8ab2cbfe2e3be073279bf7325c6bb9aa3473f7d548c7e3d4ec0d9f4389] <==
	I1210 06:58:02.820026       1 serving.go:386] Generated self-signed cert in-memory
	I1210 06:58:02.832871       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1210 06:58:02.832908       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:58:02.834464       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1210 06:58:02.834570       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1210 06:58:02.834685       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1210 06:58:02.835012       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 06:58:27.218982       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.50.121:8443/healthz\": dial tcp 192.168.50.121:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.121:53498->192.168.50.121:8443: read: connection reset by peer"
	
	
	==> kube-scheduler [04707ed350f8566333e6073b699f46c0ddd62a028e2b52fb2d915b819bc31fe4] <==
	I1210 06:55:19.908529       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:55:29.915806       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.50.121:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1210 06:55:29.915877       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:55:29.915884       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:55:40.746893       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 06:55:40.747364       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:55:40.755533       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:55:40.757414       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:55:40.759421       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:55:40.757434       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kubelet <==
	Dec 10 06:59:01 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:01.229262   10342 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://192.168.50.121:8443/api/v1/nodes\": dial tcp 192.168.50.121:8443: connect: connection refused" node="kubernetes-upgrade-921183"
	Dec 10 06:59:02 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:02.224958   10342 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.50.121:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-921183?timeout=10s\": dial tcp 192.168.50.121:8443: connect: connection refused" interval="7s"
	Dec 10 06:59:02 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:02.541362   10342 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-921183\" not found" node="kubernetes-upgrade-921183"
	Dec 10 06:59:02 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:02.541450   10342 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-kubernetes-upgrade-921183" containerName="etcd"
	Dec 10 06:59:02 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:02.549344   10342 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_1\" is already in use by d0f2bc62e5b5b3f4a67fde78fbc0e40726d31c69277f4ddfc5a2e90b4e46aea4. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="361608da879b6530d1ac73cb485450077852b6a7a338f2707d245ed7fbbae769"
	Dec 10 06:59:02 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:02.549439   10342 kuberuntime_manager.go:1664] "Unhandled Error" err="container etcd start failed in pod etcd-kubernetes-upgrade-921183_kube-system(fb9c6b716ed479be7c5eb11a56ebe61a): CreateContainerError: the container name \"k8s_etcd_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_1\" is already in use by d0f2bc62e5b5b3f4a67fde78fbc0e40726d31c69277f4ddfc5a2e90b4e46aea4. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 10 06:59:02 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:02.549470   10342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_1\\\" is already in use by d0f2bc62e5b5b3f4a67fde78fbc0e40726d31c69277f4ddfc5a2e90b4e46aea4. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-921183" podUID="fb9c6b716ed479be7c5eb11a56ebe61a"
	Dec 10 06:59:04 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:04.180620   10342 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.50.121:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.50.121:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Dec 10 06:59:08 kubernetes-upgrade-921183 kubelet[10342]: I1210 06:59:08.231207   10342 kubelet_node_status.go:74] "Attempting to register node" node="kubernetes-upgrade-921183"
	Dec 10 06:59:08 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:08.232313   10342 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://192.168.50.121:8443/api/v1/nodes\": dial tcp 192.168.50.121:8443: connect: connection refused" node="kubernetes-upgrade-921183"
	Dec 10 06:59:08 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:08.630155   10342 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765349948629681565  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:124925}  inodes_used:{value:49}}"
	Dec 10 06:59:08 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:08.630176   10342 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765349948629681565  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:124925}  inodes_used:{value:49}}"
	Dec 10 06:59:09 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:09.226306   10342 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.50.121:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-921183?timeout=10s\": dial tcp 192.168.50.121:8443: connect: connection refused" interval="7s"
	Dec 10 06:59:10 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:10.711287   10342 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.50.121:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.121:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-921183.187fc8373f5db449  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-921183,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-921183 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-921183,},FirstTimestamp:2025-12-10 06:55:18.564385865 +0000 UTC m=+0.284430335,LastTimestamp:2025-12-10 06:55:18.564385865 +0000 UTC m=+0.284430335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingIn
stance:kubernetes-upgrade-921183,}"
	Dec 10 06:59:15 kubernetes-upgrade-921183 kubelet[10342]: I1210 06:59:15.234193   10342 kubelet_node_status.go:74] "Attempting to register node" node="kubernetes-upgrade-921183"
	Dec 10 06:59:15 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:15.234736   10342 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://192.168.50.121:8443/api/v1/nodes\": dial tcp 192.168.50.121:8443: connect: connection refused" node="kubernetes-upgrade-921183"
	Dec 10 06:59:16 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:16.227307   10342 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.50.121:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-921183?timeout=10s\": dial tcp 192.168.50.121:8443: connect: connection refused" interval="7s"
	Dec 10 06:59:17 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:17.538478   10342 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-921183\" not found" node="kubernetes-upgrade-921183"
	Dec 10 06:59:17 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:17.538573   10342 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-kubernetes-upgrade-921183" containerName="etcd"
	Dec 10 06:59:17 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:17.546312   10342 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_1\" is already in use by d0f2bc62e5b5b3f4a67fde78fbc0e40726d31c69277f4ddfc5a2e90b4e46aea4. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="361608da879b6530d1ac73cb485450077852b6a7a338f2707d245ed7fbbae769"
	Dec 10 06:59:17 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:17.546410   10342 kuberuntime_manager.go:1664] "Unhandled Error" err="container etcd start failed in pod etcd-kubernetes-upgrade-921183_kube-system(fb9c6b716ed479be7c5eb11a56ebe61a): CreateContainerError: the container name \"k8s_etcd_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_1\" is already in use by d0f2bc62e5b5b3f4a67fde78fbc0e40726d31c69277f4ddfc5a2e90b4e46aea4. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 10 06:59:17 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:17.546455   10342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-921183_kube-system_fb9c6b716ed479be7c5eb11a56ebe61a_1\\\" is already in use by d0f2bc62e5b5b3f4a67fde78fbc0e40726d31c69277f4ddfc5a2e90b4e46aea4. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-921183" podUID="fb9c6b716ed479be7c5eb11a56ebe61a"
	Dec 10 06:59:18 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:18.632071   10342 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765349958631682295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:124925}  inodes_used:{value:49}}"
	Dec 10 06:59:18 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:18.632107   10342 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765349958631682295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:124925}  inodes_used:{value:49}}"
	Dec 10 06:59:20 kubernetes-upgrade-921183 kubelet[10342]: E1210 06:59:20.712508   10342 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.50.121:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.121:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-921183.187fc8373f5db449  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-921183,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-921183 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-921183,},FirstTimestamp:2025-12-10 06:55:18.564385865 +0000 UTC m=+0.284430335,LastTimestamp:2025-12-10 06:55:18.564385865 +0000 UTC m=+0.284430335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingIn
stance:kubernetes-upgrade-921183,}"
	

                                                
                                                
-- /stdout --
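(Editor's note on the quoted logs above: they show two intertwined failures on kubernetes-upgrade-921183. kube-apiserver keeps exiting because it cannot reach etcd on 127.0.0.1:2379 ("connection refused", then "Error creating leases: ... context deadline exceeded"), and kubelet cannot restart etcd because the container name is still held by an exited container ("that name is already in use"). The sketch below is a minimal, hypothetical Go probe, not part of minikube or its test suite; the file and endpoint values are taken from the log above and are illustrative only. It simply reproduces the reachability half of the diagnosis by dialing the two endpoints the logs report as refused.)

	// etcdcheck.go - hypothetical sketch, not part of the minikube test suite.
	// Dials the endpoints the log above reports as "connection refused":
	// etcd on 127.0.0.1:2379 and the apiserver on 192.168.50.121:8443.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoints copied from the failure log; adjust for your cluster.
		endpoints := []string{"127.0.0.1:2379", "192.168.50.121:8443"}
		for _, ep := range endpoints {
			conn, err := net.DialTimeout("tcp", ep, 2*time.Second)
			if err != nil {
				// Matches the "connect: connection refused" errors above.
				fmt.Printf("%s: unreachable (%v)\n", ep, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: reachable\n", ep)
		}
	}

(The name-conflict half would typically be resolved by removing the exited etcd container named in the kubelet error so the kubelet can recreate it; that step is not shown here.)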
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-921183 -n kubernetes-upgrade-921183
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-921183 -n kubernetes-upgrade-921183: exit status 2 (195.498991ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-921183" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-921183" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-921183
--- FAIL: TestKubernetesUpgrade (931.93s)
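(Editor's note: the kube-controller-manager log above fails with "failed to wait for apiserver being healthy: timed out waiting for the condition". The snippet below is a hypothetical Go sketch of that wait-for-healthz pattern, not minikube's or Kubernetes' actual implementation; the URL is taken from the log, and the function name waitForHealthz is illustrative. It polls the apiserver's /healthz endpoint until it answers 200 or the deadline passes, which is the condition that never became true in this run.)

	// healthwait.go - hypothetical sketch of the "wait for apiserver healthy"
	// pattern that the controller-manager log above reports timing out on.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed cert during bring-up, so this
			// probe skips verification (acceptable for a liveness probe only).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				ok := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if ok {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.121:8443/healthz", 30*time.Second); err != nil {
			// Mirrors the "failed to wait for apiserver being healthy" error above.
			fmt.Println(err)
		}
	}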

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-824458 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-824458 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.240384281s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-824458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-824458" primary control-plane node in "pause-824458" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-824458" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:42:27.006389   42439 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:42:27.006519   42439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:42:27.006524   42439 out.go:374] Setting ErrFile to fd 2...
	I1210 06:42:27.006528   42439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:42:27.006764   42439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:42:27.007232   42439 out.go:368] Setting JSON to false
	I1210 06:42:27.008243   42439 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5091,"bootTime":1765343856,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:42:27.008307   42439 start.go:143] virtualization: kvm guest
	I1210 06:42:27.010715   42439 out.go:179] * [pause-824458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:42:27.012186   42439 notify.go:221] Checking for updates...
	I1210 06:42:27.012238   42439 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:42:27.013388   42439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:42:27.014626   42439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:42:27.016025   42439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 06:42:27.017369   42439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:42:27.018655   42439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:42:27.020423   42439 config.go:182] Loaded profile config "pause-824458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:42:27.021169   42439 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:42:27.067814   42439 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 06:42:27.068862   42439 start.go:309] selected driver: kvm2
	I1210 06:42:27.068891   42439 start.go:927] validating driver "kvm2" against &{Name:pause-824458 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-824458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-instal
ler:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:42:27.069079   42439 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:42:27.070511   42439 cni.go:84] Creating CNI manager for ""
	I1210 06:42:27.070592   42439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:42:27.070657   42439 start.go:353] cluster config:
	{Name:pause-824458 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-824458 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false p
ortainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:42:27.070854   42439 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:42:27.097837   42439 out.go:179] * Starting "pause-824458" primary control-plane node in "pause-824458" cluster
	I1210 06:42:27.169168   42439 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:42:27.169214   42439 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 06:42:27.169224   42439 cache.go:65] Caching tarball of preloaded images
	I1210 06:42:27.169402   42439 preload.go:238] Found /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:42:27.169431   42439 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 06:42:27.169616   42439 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/config.json ...
	I1210 06:42:27.169894   42439 start.go:360] acquireMachinesLock for pause-824458: {Name:mkc15d5369b31c34b8a5517a09471706fa3f291a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 06:42:32.166164   42439 start.go:364] duration metric: took 4.996221734s to acquireMachinesLock for "pause-824458"
	I1210 06:42:32.166247   42439 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:42:32.166261   42439 fix.go:54] fixHost starting: 
	I1210 06:42:32.169020   42439 fix.go:112] recreateIfNeeded on pause-824458: state=Running err=<nil>
	W1210 06:42:32.169045   42439 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:42:32.171184   42439 out.go:252] * Updating the running kvm2 "pause-824458" VM ...
	I1210 06:42:32.171217   42439 machine.go:94] provisionDockerMachine start ...
	I1210 06:42:32.175847   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.177159   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:32.177201   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.177695   42439 main.go:143] libmachine: Using SSH client type: native
	I1210 06:42:32.178027   42439 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1210 06:42:32.178047   42439 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:42:32.300093   42439 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-824458
	
	I1210 06:42:32.300132   42439 buildroot.go:166] provisioning hostname "pause-824458"
	I1210 06:42:32.303288   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.303885   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:32.303920   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.304176   42439 main.go:143] libmachine: Using SSH client type: native
	I1210 06:42:32.304470   42439 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1210 06:42:32.304488   42439 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-824458 && echo "pause-824458" | sudo tee /etc/hostname
	I1210 06:42:32.438801   42439 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-824458
	
	I1210 06:42:32.441421   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.441895   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:32.441917   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.442087   42439 main.go:143] libmachine: Using SSH client type: native
	I1210 06:42:32.442337   42439 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1210 06:42:32.442376   42439 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-824458' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-824458/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-824458' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:42:32.556672   42439 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:42:32.556704   42439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8667/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8667/.minikube}
	I1210 06:42:32.556752   42439 buildroot.go:174] setting up certificates
	I1210 06:42:32.556763   42439 provision.go:84] configureAuth start
	I1210 06:42:32.560048   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.560593   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:32.560628   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.563191   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.563521   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:32.563546   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.563722   42439 provision.go:143] copyHostCerts
	I1210 06:42:32.563790   42439 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem, removing ...
	I1210 06:42:32.563808   42439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem
	I1210 06:42:32.563882   42439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem (1082 bytes)
	I1210 06:42:32.563984   42439 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem, removing ...
	I1210 06:42:32.563994   42439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem
	I1210 06:42:32.564025   42439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem (1123 bytes)
	I1210 06:42:32.564083   42439 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem, removing ...
	I1210 06:42:32.564090   42439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem
	I1210 06:42:32.564110   42439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem (1675 bytes)
	I1210 06:42:32.564161   42439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem org=jenkins.pause-824458 san=[127.0.0.1 192.168.39.53 localhost minikube pause-824458]
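The provisioning step above generates a server certificate for the VM, signed by the shared minikube CA and carrying the SANs listed in the log (127.0.0.1, 192.168.39.53, localhost, minikube, pause-824458). A minimal Go sketch of that kind of issuance with crypto/x509 follows; the in-memory stand-in CA, 2048-bit keys, and 24-hour validity are assumptions for illustration, not minikube's actual provision code.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Stand-in CA generated in memory; the real flow loads ca.pem / ca-key.pem from disk.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour), // assumed validity
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	// Server certificate with the SANs from the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-824458"}},
		DNSNames:     []string{"localhost", "minikube", "pause-824458"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.53")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```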
	I1210 06:42:32.666800   42439 provision.go:177] copyRemoteCerts
	I1210 06:42:32.666885   42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:42:32.669920   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.670350   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:32.670392   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.670587   42439 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/pause-824458/id_rsa Username:docker}
	I1210 06:42:32.765159   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:42:32.805213   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1210 06:42:32.851578   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:42:32.890433   42439 provision.go:87] duration metric: took 333.642537ms to configureAuth
	I1210 06:42:32.890468   42439 buildroot.go:189] setting minikube options for container-runtime
	I1210 06:42:32.890747   42439 config.go:182] Loaded profile config "pause-824458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:42:32.894060   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.894513   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:32.894540   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:32.894739   42439 main.go:143] libmachine: Using SSH client type: native
	I1210 06:42:32.894982   42439 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1210 06:42:32.895007   42439 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:42:38.485492   42439 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:42:38.485521   42439 machine.go:97] duration metric: took 6.31429615s to provisionDockerMachine
	I1210 06:42:38.485535   42439 start.go:293] postStartSetup for "pause-824458" (driver="kvm2")
	I1210 06:42:38.485549   42439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:42:38.485621   42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:42:38.489117   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.489517   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:38.489541   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.489692   42439 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/pause-824458/id_rsa Username:docker}
	I1210 06:42:38.578846   42439 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:42:38.584587   42439 info.go:137] Remote host: Buildroot 2025.02
	I1210 06:42:38.584616   42439 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/addons for local assets ...
	I1210 06:42:38.584701   42439 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/files for local assets ...
	I1210 06:42:38.584825   42439 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem -> 125882.pem in /etc/ssl/certs
	I1210 06:42:38.584945   42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:42:38.597930   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem --> /etc/ssl/certs/125882.pem (1708 bytes)
	I1210 06:42:38.632348   42439 start.go:296] duration metric: took 146.798064ms for postStartSetup
	I1210 06:42:38.632406   42439 fix.go:56] duration metric: took 6.466137615s for fixHost
	I1210 06:42:38.635652   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.636102   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:38.636138   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.636373   42439 main.go:143] libmachine: Using SSH client type: native
	I1210 06:42:38.636671   42439 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1210 06:42:38.636686   42439 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 06:42:38.758489   42439 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765348958.755336720
	
	I1210 06:42:38.758518   42439 fix.go:216] guest clock: 1765348958.755336720
	I1210 06:42:38.758528   42439 fix.go:229] Guest: 2025-12-10 06:42:38.75533672 +0000 UTC Remote: 2025-12-10 06:42:38.632410163 +0000 UTC m=+11.693169278 (delta=122.926557ms)
	I1210 06:42:38.758556   42439 fix.go:200] guest clock delta is within tolerance: 122.926557ms
	I1210 06:42:38.758564   42439 start.go:83] releasing machines lock for "pause-824458", held for 6.592369464s
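The fixHost step above reads the guest's clock over SSH with `date +%s.%N` and compares it against the host's wall clock before releasing the machine lock, logging the delta and whether it is within tolerance. A minimal sketch of that comparison (not minikube's fix.go; the 2-second tolerance is an assumed value for illustration):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts "seconds.nanoseconds" output from `date +%s.%N` into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly 9 digits before parsing.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1765348958.755336720") // value seen in the log above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
```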
	I1210 06:42:38.762196   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.762637   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:38.762668   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.763252   42439 ssh_runner.go:195] Run: cat /version.json
	I1210 06:42:38.763334   42439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:42:38.767614   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.767913   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.768034   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:38.768063   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.768262   42439 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/pause-824458/id_rsa Username:docker}
	I1210 06:42:38.768441   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:38.768475   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:38.768653   42439 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/pause-824458/id_rsa Username:docker}
	I1210 06:42:38.852680   42439 ssh_runner.go:195] Run: systemctl --version
	I1210 06:42:38.898943   42439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:42:39.060029   42439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:42:39.073229   42439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:42:39.073316   42439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:42:39.085100   42439 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:42:39.085126   42439 start.go:496] detecting cgroup driver to use...
	I1210 06:42:39.085192   42439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:42:39.109038   42439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:42:39.126879   42439 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:42:39.126932   42439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:42:39.150710   42439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:42:39.168026   42439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:42:39.384139   42439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:42:39.568871   42439 docker.go:234] disabling docker service ...
	I1210 06:42:39.568932   42439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:42:39.606682   42439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:42:39.627594   42439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:42:39.850014   42439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:42:40.062403   42439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:42:40.078407   42439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:42:40.105246   42439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:42:40.105319   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:42:40.119241   42439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:42:40.119330   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:42:40.136561   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:42:40.149969   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:42:40.163739   42439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:42:40.178236   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:42:40.193496   42439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:42:40.206913   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:42:40.220238   42439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:42:40.233931   42439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:42:40.257937   42439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:42:40.488894   42439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:42:47.130564   42439 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.641626089s)
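The sequence above rewrites CRI-O's drop-in config with a series of idempotent sed edits (pause image, cgroupfs cgroup driver, conmon cgroup, unprivileged-port sysctl) and then restarts crio. A rough Go sketch of the same rewrites applied to an in-memory copy of /etc/crio/crio.conf.d/02-crio.conf; the starting contents shown are assumed, not read from the VM:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting drop-in; the real file on the VM may differ.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Force the pause image and cgroup driver, mirroring the logged sed edits.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// Ensure unprivileged ports are allowed via default_sysctls.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	fmt.Print(conf)
}
```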
	I1210 06:42:47.130609   42439 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:42:47.130679   42439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:42:47.138866   42439 start.go:564] Will wait 60s for crictl version
	I1210 06:42:47.138925   42439 ssh_runner.go:195] Run: which crictl
	I1210 06:42:47.143721   42439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 06:42:47.178521   42439 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 06:42:47.178623   42439 ssh_runner.go:195] Run: crio --version
	I1210 06:42:47.209339   42439 ssh_runner.go:195] Run: crio --version
	I1210 06:42:47.248138   42439 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1210 06:42:47.252810   42439 main.go:143] libmachine: domain pause-824458 has defined MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:47.253267   42439 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:51:fe:4a", ip: ""} in network mk-pause-824458: {Iface:virbr1 ExpiryTime:2025-12-10 07:41:27 +0000 UTC Type:0 Mac:52:54:00:51:fe:4a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:pause-824458 Clientid:01:52:54:00:51:fe:4a}
	I1210 06:42:47.253303   42439 main.go:143] libmachine: domain pause-824458 has defined IP address 192.168.39.53 and MAC address 52:54:00:51:fe:4a in network mk-pause-824458
	I1210 06:42:47.253536   42439 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 06:42:47.259559   42439 kubeadm.go:884] updating cluster {Name:pause-824458 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-824458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:42:47.259728   42439 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:42:47.259797   42439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:42:47.315074   42439 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:42:47.315100   42439 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:42:47.315160   42439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:42:47.361974   42439 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:42:47.362003   42439 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:42:47.362019   42439 kubeadm.go:935] updating node { 192.168.39.53 8443 v1.34.2 crio true true} ...
	I1210 06:42:47.362152   42439 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-824458 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-824458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:42:47.362228   42439 ssh_runner.go:195] Run: crio config
	I1210 06:42:47.416782   42439 cni.go:84] Creating CNI manager for ""
	I1210 06:42:47.416803   42439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:42:47.416815   42439 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:42:47.416839   42439 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-824458 NodeName:pause-824458 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:42:47.416987   42439 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-824458"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.53"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
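One quick way to sanity-check a generated multi-document config like the one above is to split it on YAML document separators and list each document's kind. The sketch below embeds a trimmed stand-in rather than reading the real /var/tmp/minikube/kubeadm.yaml.new:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Trimmed stand-in for the four documents written by kubeadm config generation.
	kubeadmYAML := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	for i, doc := range strings.Split(kubeadmYAML, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
			}
		}
	}
}
```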
	
	I1210 06:42:47.417067   42439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 06:42:47.434029   42439 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:42:47.434100   42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:42:47.451156   42439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1210 06:42:47.476264   42439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:42:47.499370   42439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1210 06:42:47.522266   42439 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I1210 06:42:47.527561   42439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:42:47.719118   42439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:42:47.739390   42439 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458 for IP: 192.168.39.53
	I1210 06:42:47.739417   42439 certs.go:195] generating shared ca certs ...
	I1210 06:42:47.739457   42439 certs.go:227] acquiring lock for ca certs: {Name:mkbf1082c8328cc7c1360f5f8b344958e8aa5792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:42:47.739676   42439 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key
	I1210 06:42:47.739754   42439 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key
	I1210 06:42:47.739767   42439 certs.go:257] generating profile certs ...
	I1210 06:42:47.739896   42439 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/client.key
	I1210 06:42:47.739978   42439 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/apiserver.key.47ca95d5
	I1210 06:42:47.740033   42439 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/proxy-client.key
	I1210 06:42:47.740201   42439 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588.pem (1338 bytes)
	W1210 06:42:47.740244   42439 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588_empty.pem, impossibly tiny 0 bytes
	I1210 06:42:47.740254   42439 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:42:47.740292   42439 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:42:47.740327   42439 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:42:47.740383   42439 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem (1675 bytes)
	I1210 06:42:47.740461   42439 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem (1708 bytes)
	I1210 06:42:47.741287   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:42:47.778723   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:42:47.815986   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:42:47.855263   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:42:47.887132   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 06:42:47.916846   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:42:47.947142   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:42:47.982070   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:42:48.013393   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem --> /usr/share/ca-certificates/125882.pem (1708 bytes)
	I1210 06:42:48.043476   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:42:48.078084   42439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/12588.pem --> /usr/share/ca-certificates/12588.pem (1338 bytes)
	I1210 06:42:48.108774   42439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:42:48.135236   42439 ssh_runner.go:195] Run: openssl version
	I1210 06:42:48.141684   42439 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/125882.pem
	I1210 06:42:48.155527   42439 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/125882.pem /etc/ssl/certs/125882.pem
	I1210 06:42:48.170092   42439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125882.pem
	I1210 06:42:48.177034   42439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:56 /usr/share/ca-certificates/125882.pem
	I1210 06:42:48.177110   42439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125882.pem
	I1210 06:42:48.185120   42439 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:42:48.202489   42439 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:42:48.218281   42439 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:42:48.230784   42439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:42:48.238344   42439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:42:48.238425   42439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:42:48.249086   42439 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:42:48.267925   42439 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12588.pem
	I1210 06:42:48.282338   42439 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12588.pem /etc/ssl/certs/12588.pem
	I1210 06:42:48.305949   42439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12588.pem
	I1210 06:42:48.322095   42439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:56 /usr/share/ca-certificates/12588.pem
	I1210 06:42:48.322174   42439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12588.pem
	I1210 06:42:48.363559   42439 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:42:48.396366   42439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:42:48.404027   42439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:42:48.439685   42439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:42:48.478336   42439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:42:48.489041   42439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:42:48.506411   42439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:42:48.537390   42439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
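The repeated `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate will still be valid 24 hours from now. The same check in Go, as a stdlib-only sketch (the relative file path is a placeholder; on the VM the certificates live under /var/lib/minikube/certs):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path for illustration.
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```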
	I1210 06:42:48.567575   42439 kubeadm.go:401] StartCluster: {Name:pause-824458 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-824458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-g
pu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:42:48.567745   42439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:42:48.567819   42439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:42:48.663173   42439 cri.go:89] found id: "a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627"
	I1210 06:42:48.663208   42439 cri.go:89] found id: "9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a"
	I1210 06:42:48.663216   42439 cri.go:89] found id: "a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1"
	I1210 06:42:48.663223   42439 cri.go:89] found id: "21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524"
	I1210 06:42:48.663229   42439 cri.go:89] found id: "8dfe311f92edbd04baf2eaddcbfeb1872eb2630e324018f045e8a09f3aa671d3"
	I1210 06:42:48.663234   42439 cri.go:89] found id: "ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea"
	I1210 06:42:48.663238   42439 cri.go:89] found id: ""
	I1210 06:42:48.663296   42439 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-824458 -n pause-824458
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-824458 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-824458 logs -n 25: (1.387478727s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-579150 sudo docker system info                                                                                                                                                                                │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                               │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl cat cri-docker --no-pager                                                                                                                                                               │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                          │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                    │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cri-dockerd --version                                                                                                                                                                             │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl status containerd --all --full --no-pager                                                                                                                                               │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl cat containerd --no-pager                                                                                                                                                               │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                        │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cat /etc/containerd/config.toml                                                                                                                                                                   │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo containerd config dump                                                                                                                                                                            │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl status crio --all --full --no-pager                                                                                                                                                     │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl cat crio --no-pager                                                                                                                                                                     │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                           │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo crio config                                                                                                                                                                                       │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ delete  │ -p cilium-579150                                                                                                                                                                                                        │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │ 10 Dec 25 06:41 UTC │
	│ start   │ -p cert-expiration-096353 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-096353 │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │ 10 Dec 25 06:42 UTC │
	│ start   │ -p pause-824458 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-824458           │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:43 UTC │
	│ start   │ -p NoKubernetes-894399 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:42 UTC │
	│ delete  │ -p NoKubernetes-894399                                                                                                                                                                                                  │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:42 UTC │
	│ start   │ -p NoKubernetes-894399 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:43 UTC │
	│ delete  │ -p offline-crio-832745                                                                                                                                                                                                  │ offline-crio-832745    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:42 UTC │
	│ start   │ -p cert-options-802205 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-802205    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │                     │
	│ ssh     │ -p NoKubernetes-894399 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:43 UTC │                     │
	│ stop    │ -p NoKubernetes-894399                                                                                                                                                                                                  │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:42:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:42:59.891596   42868 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:42:59.891863   42868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:42:59.891867   42868 out.go:374] Setting ErrFile to fd 2...
	I1210 06:42:59.891870   42868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:42:59.892067   42868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:42:59.892570   42868 out.go:368] Setting JSON to false
	I1210 06:42:59.893469   42868 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5124,"bootTime":1765343856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:42:59.893519   42868 start.go:143] virtualization: kvm guest
	I1210 06:42:59.895398   42868 out.go:179] * [cert-options-802205] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:42:59.896732   42868 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:42:59.896749   42868 notify.go:221] Checking for updates...
	I1210 06:42:59.898956   42868 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:42:59.900345   42868 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:42:59.901474   42868 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 06:42:59.902456   42868 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:42:59.903622   42868 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:42:59.905184   42868 config.go:182] Loaded profile config "NoKubernetes-894399": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1210 06:42:59.905265   42868 config.go:182] Loaded profile config "cert-expiration-096353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:42:59.905395   42868 config.go:182] Loaded profile config "pause-824458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:42:59.905493   42868 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:42:59.944185   42868 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 06:42:59.945300   42868 start.go:309] selected driver: kvm2
	I1210 06:42:59.945313   42868 start.go:927] validating driver "kvm2" against <nil>
	I1210 06:42:59.945323   42868 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:42:59.946176   42868 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:42:59.946428   42868 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 06:42:59.946449   42868 cni.go:84] Creating CNI manager for ""
	I1210 06:42:59.946508   42868 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:42:59.946517   42868 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 06:42:59.946572   42868 start.go:353] cluster config:
	{Name:cert-options-802205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-802205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:42:59.946749   42868 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:42:59.948268   42868 out.go:179] * Starting "cert-options-802205" primary control-plane node in "cert-options-802205" cluster
	I1210 06:42:56.878549   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:42:56.879271   42676 main.go:143] libmachine: no network interface addresses found for domain NoKubernetes-894399 (source=lease)
	I1210 06:42:56.879289   42676 main.go:143] libmachine: trying to list again with source=arp
	I1210 06:42:56.879676   42676 main.go:143] libmachine: unable to find current IP address of domain NoKubernetes-894399 in network mk-NoKubernetes-894399 (interfaces detected: [])
	I1210 06:42:56.879716   42676 retry.go:31] will retry after 1.204844471s: waiting for domain to come up
	I1210 06:42:58.086471   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:42:58.087279   42676 main.go:143] libmachine: no network interface addresses found for domain NoKubernetes-894399 (source=lease)
	I1210 06:42:58.087300   42676 main.go:143] libmachine: trying to list again with source=arp
	I1210 06:42:58.087667   42676 main.go:143] libmachine: unable to find current IP address of domain NoKubernetes-894399 in network mk-NoKubernetes-894399 (interfaces detected: [])
	I1210 06:42:58.087719   42676 retry.go:31] will retry after 1.174947773s: waiting for domain to come up
	I1210 06:42:59.264101   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:42:59.264918   42676 main.go:143] libmachine: no network interface addresses found for domain NoKubernetes-894399 (source=lease)
	I1210 06:42:59.264937   42676 main.go:143] libmachine: trying to list again with source=arp
	I1210 06:42:59.265426   42676 main.go:143] libmachine: unable to find current IP address of domain NoKubernetes-894399 in network mk-NoKubernetes-894399 (interfaces detected: [])
	I1210 06:42:59.265479   42676 retry.go:31] will retry after 2.243629044s: waiting for domain to come up
	I1210 06:42:58.019531   42439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:42:58.020200   42439 addons.go:530] duration metric: took 3.672488ms for enable addons: enabled=[]
	I1210 06:42:58.277670   42439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:42:58.301572   42439 node_ready.go:35] waiting up to 6m0s for node "pause-824458" to be "Ready" ...
	I1210 06:42:58.304691   42439 node_ready.go:49] node "pause-824458" is "Ready"
	I1210 06:42:58.304711   42439 node_ready.go:38] duration metric: took 3.096366ms for node "pause-824458" to be "Ready" ...
	I1210 06:42:58.304722   42439 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:42:58.304767   42439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:58.326727   42439 api_server.go:72] duration metric: took 310.230233ms to wait for apiserver process to appear ...
	I1210 06:42:58.326768   42439 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:42:58.326792   42439 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I1210 06:42:58.333491   42439 api_server.go:279] https://192.168.39.53:8443/healthz returned 200:
	ok
	I1210 06:42:58.334606   42439 api_server.go:141] control plane version: v1.34.2
	I1210 06:42:58.334629   42439 api_server.go:131] duration metric: took 7.854634ms to wait for apiserver health ...
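	The healthz wait above amounts to polling https://192.168.39.53:8443/healthz until it returns 200. A minimal Go sketch of such a probe, with the host and timeout taken from the log (illustrative only, not minikube's own implementation):

    // healthz_probe.go - illustrative sketch of polling an apiserver /healthz
    // endpoint until it answers 200, as the log above does.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // A bare probe skips cert verification; minikube's real client
            // authenticates with the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "returned 200: ok"
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.53:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }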
	I1210 06:42:58.334639   42439 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:42:58.337502   42439 system_pods.go:59] 6 kube-system pods found
	I1210 06:42:58.337533   42439 system_pods.go:61] "coredns-66bc5c9577-7glnx" [160b4454-f699-48f2-9f5a-f8f8101f611b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:42:58.337544   42439 system_pods.go:61] "etcd-pause-824458" [70c299bc-2b6b-4cc1-9c7a-bf33883742f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:42:58.337553   42439 system_pods.go:61] "kube-apiserver-pause-824458" [0a69dfc1-4ea0-4f7f-893d-5c0db5b0bb77] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:42:58.337562   42439 system_pods.go:61] "kube-controller-manager-pause-824458" [a06a77fe-6843-4b6a-b5c8-51374835b6b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:42:58.337568   42439 system_pods.go:61] "kube-proxy-fzkpb" [9a52a9b4-a54f-4b7d-9313-9597f8681754] Running
	I1210 06:42:58.337575   42439 system_pods.go:61] "kube-scheduler-pause-824458" [89276f6e-ed4b-4140-81fd-555f30debc8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:42:58.337587   42439 system_pods.go:74] duration metric: took 2.940409ms to wait for pod list to return data ...
	I1210 06:42:58.337597   42439 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:42:58.339816   42439 default_sa.go:45] found service account: "default"
	I1210 06:42:58.339838   42439 default_sa.go:55] duration metric: took 2.235062ms for default service account to be created ...
	I1210 06:42:58.339849   42439 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:42:58.342958   42439 system_pods.go:86] 6 kube-system pods found
	I1210 06:42:58.342980   42439 system_pods.go:89] "coredns-66bc5c9577-7glnx" [160b4454-f699-48f2-9f5a-f8f8101f611b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:42:58.342989   42439 system_pods.go:89] "etcd-pause-824458" [70c299bc-2b6b-4cc1-9c7a-bf33883742f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:42:58.343004   42439 system_pods.go:89] "kube-apiserver-pause-824458" [0a69dfc1-4ea0-4f7f-893d-5c0db5b0bb77] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:42:58.343017   42439 system_pods.go:89] "kube-controller-manager-pause-824458" [a06a77fe-6843-4b6a-b5c8-51374835b6b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:42:58.343026   42439 system_pods.go:89] "kube-proxy-fzkpb" [9a52a9b4-a54f-4b7d-9313-9597f8681754] Running
	I1210 06:42:58.343031   42439 system_pods.go:89] "kube-scheduler-pause-824458" [89276f6e-ed4b-4140-81fd-555f30debc8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:42:58.343037   42439 system_pods.go:126] duration metric: took 3.182316ms to wait for k8s-apps to be running ...
	I1210 06:42:58.343043   42439 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:42:58.343084   42439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:58.360339   42439 system_svc.go:56] duration metric: took 17.285703ms WaitForService to wait for kubelet
	I1210 06:42:58.360375   42439 kubeadm.go:587] duration metric: took 343.886574ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:42:58.360395   42439 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:42:58.364315   42439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 06:42:58.364335   42439 node_conditions.go:123] node cpu capacity is 2
	I1210 06:42:58.364345   42439 node_conditions.go:105] duration metric: took 3.945331ms to run NodePressure ...
	I1210 06:42:58.364372   42439 start.go:242] waiting for startup goroutines ...
	I1210 06:42:58.364379   42439 start.go:247] waiting for cluster config update ...
	I1210 06:42:58.364386   42439 start.go:256] writing updated cluster config ...
	I1210 06:42:58.364657   42439 ssh_runner.go:195] Run: rm -f paused
	I1210 06:42:58.369591   42439 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:42:58.370225   42439 kapi.go:59] client config for pause-824458: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/client.key", CAFile:"/home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:42:58.373020   42439 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7glnx" in "kube-system" namespace to be "Ready" or be gone ...
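	The pod_ready wait above checks each pod's Ready condition using the client config shown a few lines earlier. A minimal client-go sketch of the same idea (host, cert paths, and pod name copied from the log; not minikube's own code):

    // Illustrative only: build a rest.Config from the cert/key/CA paths shown
    // in the log and check whether a pod has the Ready condition set to True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.39.53:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/22089-8667/.minikube/profiles/pause-824458/client.key",
                CAFile:   "/home/jenkins/minikube-integration/22089-8667/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-7glnx", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", podIsReady(pod))
    }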
	W1210 06:43:00.380956   42439 pod_ready.go:104] pod "coredns-66bc5c9577-7glnx" is not "Ready", error: <nil>
	I1210 06:42:59.949324   42868 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:42:59.949347   42868 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 06:42:59.949367   42868 cache.go:65] Caching tarball of preloaded images
	I1210 06:42:59.949474   42868 preload.go:238] Found /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:42:59.949481   42868 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
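	The preload step above skips the download because the tarball already sits in the local cache. A small sketch of that existence check, with the cache layout and the v18 preload name taken from the paths in the log:

    // Illustrative sketch: skip the preload download when the tarball for the
    // requested Kubernetes version and runtime already exists locally.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func preloadExists(minikubeHome, k8sVersion, runtime string) (string, bool) {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
        path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
        if _, err := os.Stat(path); err != nil {
            return path, false
        }
        return path, true
    }

    func main() {
        path, ok := preloadExists("/home/jenkins/minikube-integration/22089-8667/.minikube", "v1.34.2", "cri-o")
        if ok {
            fmt.Println("found local preload, skipping download:", path)
        } else {
            fmt.Println("preload missing, would download:", path)
        }
    }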
	I1210 06:42:59.949575   42868 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/cert-options-802205/config.json ...
	I1210 06:42:59.949589   42868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/cert-options-802205/config.json: {Name:mk7796b19d0d588b88aca2b1a089f3eef4c8cac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:42:59.949717   42868 start.go:360] acquireMachinesLock for cert-options-802205: {Name:mkc15d5369b31c34b8a5517a09471706fa3f291a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 06:43:01.511201   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:01.511898   42676 main.go:143] libmachine: no network interface addresses found for domain NoKubernetes-894399 (source=lease)
	I1210 06:43:01.511912   42676 main.go:143] libmachine: trying to list again with source=arp
	I1210 06:43:01.512303   42676 main.go:143] libmachine: unable to find current IP address of domain NoKubernetes-894399 in network mk-NoKubernetes-894399 (interfaces detected: [])
	I1210 06:43:01.512335   42676 retry.go:31] will retry after 2.193777802s: waiting for domain to come up
	I1210 06:43:03.708916   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:03.709651   42676 main.go:143] libmachine: no network interface addresses found for domain NoKubernetes-894399 (source=lease)
	I1210 06:43:03.709673   42676 main.go:143] libmachine: trying to list again with source=arp
	I1210 06:43:03.710011   42676 main.go:143] libmachine: unable to find current IP address of domain NoKubernetes-894399 in network mk-NoKubernetes-894399 (interfaces detected: [])
	I1210 06:43:03.710048   42676 retry.go:31] will retry after 3.046579495s: waiting for domain to come up
	W1210 06:43:02.887673   42439 pod_ready.go:104] pod "coredns-66bc5c9577-7glnx" is not "Ready", error: <nil>
	I1210 06:43:03.379589   42439 pod_ready.go:94] pod "coredns-66bc5c9577-7glnx" is "Ready"
	I1210 06:43:03.379615   42439 pod_ready.go:86] duration metric: took 5.006576623s for pod "coredns-66bc5c9577-7glnx" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:03.382155   42439 pod_ready.go:83] waiting for pod "etcd-pause-824458" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:03.388271   42439 pod_ready.go:94] pod "etcd-pause-824458" is "Ready"
	I1210 06:43:03.388295   42439 pod_ready.go:86] duration metric: took 6.117289ms for pod "etcd-pause-824458" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:03.390783   42439 pod_ready.go:83] waiting for pod "kube-apiserver-pause-824458" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:43:05.396627   42439 pod_ready.go:104] pod "kube-apiserver-pause-824458" is not "Ready", error: <nil>
	I1210 06:43:08.324732   42868 start.go:364] duration metric: took 8.374975964s to acquireMachinesLock for "cert-options-802205"
	I1210 06:43:08.324778   42868 start.go:93] Provisioning new machine with config: &{Name:cert-options-802205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-802205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:43:08.324904   42868 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 06:43:08.327158   42868 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1210 06:43:08.327402   42868 start.go:159] libmachine.API.Create for "cert-options-802205" (driver="kvm2")
	I1210 06:43:08.327437   42868 client.go:173] LocalClient.Create starting
	I1210 06:43:08.327548   42868 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem
	I1210 06:43:08.327586   42868 main.go:143] libmachine: Decoding PEM data...
	I1210 06:43:08.327603   42868 main.go:143] libmachine: Parsing certificate...
	I1210 06:43:08.327662   42868 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem
	I1210 06:43:08.327681   42868 main.go:143] libmachine: Decoding PEM data...
	I1210 06:43:08.327693   42868 main.go:143] libmachine: Parsing certificate...
	I1210 06:43:08.327981   42868 main.go:143] libmachine: creating domain...
	I1210 06:43:08.327985   42868 main.go:143] libmachine: creating network...
	I1210 06:43:08.329857   42868 main.go:143] libmachine: found existing default network
	I1210 06:43:08.330125   42868 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 06:43:08.331142   42868 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:57:b3:e6} reservation:<nil>}
	I1210 06:43:08.332068   42868 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c482f0}
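	The subnet selection above skips 192.168.39.0/24 because an existing libvirt network already uses it and settles on 192.168.50.0/24. A small sketch of that kind of pick (the candidate list here is hypothetical):

    // Illustrative sketch: choose a free private /24 by skipping subnets
    // already occupied by existing networks.
    package main

    import (
        "fmt"
        "net"
    )

    func pickFreeSubnet(candidates []string, taken []*net.IPNet) (string, bool) {
        for _, c := range candidates {
            _, subnet, err := net.ParseCIDR(c)
            if err != nil {
                continue
            }
            overlaps := false
            for _, t := range taken {
                if t.Contains(subnet.IP) || subnet.Contains(t.IP) {
                    overlaps = true
                    break
                }
            }
            if !overlaps {
                return c, true
            }
        }
        return "", false
    }

    func main() {
        _, taken, _ := net.ParseCIDR("192.168.39.0/24")
        candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
        if s, ok := pickFreeSubnet(candidates, []*net.IPNet{taken}); ok {
            fmt.Println("using free private subnet", s) // picks 192.168.50.0/24
        }
    }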
	I1210 06:43:08.332178   42868 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-cert-options-802205</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 06:43:08.340730   42868 main.go:143] libmachine: creating private network mk-cert-options-802205 192.168.50.0/24...
	I1210 06:43:08.422394   42868 main.go:143] libmachine: private network mk-cert-options-802205 192.168.50.0/24 created
	I1210 06:43:08.422730   42868 main.go:143] libmachine: <network>
	  <name>mk-cert-options-802205</name>
	  <uuid>1a8f1e44-71c2-4298-9a71-c766fd8d5913</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:8d:3e:01'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
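	The private network created above can also be reproduced by hand from the same XML. A sketch that drives the virsh CLI via os/exec (minikube's kvm2 driver talks to libvirt programmatically, so this is only an equivalent manual path, not its actual code):

    // Illustrative sketch: define and start a libvirt network from XML using virsh.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func defineAndStartNetwork(xmlDef string) error {
        f, err := os.CreateTemp("", "net-*.xml")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(xmlDef); err != nil {
            return err
        }
        f.Close()

        // virsh net-define registers the network; net-start brings up the bridge.
        for _, args := range [][]string{
            {"net-define", f.Name()},
            {"net-start", "mk-cert-options-802205"},
        } {
            cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
            out, err := cmd.CombinedOutput()
            if err != nil {
                return fmt.Errorf("virsh %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        xmlDef := `<network>
      <name>mk-cert-options-802205</name>
      <dns enable='no'/>
      <ip address='192.168.50.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.50.2' end='192.168.50.253'/>
        </dhcp>
      </ip>
    </network>`
        if err := defineAndStartNetwork(xmlDef); err != nil {
            fmt.Println(err)
        }
    }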
	
	I1210 06:43:08.422763   42868 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205 ...
	I1210 06:43:08.422784   42868 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22089-8667/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 06:43:08.422790   42868 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 06:43:08.422865   42868 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22089-8667/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22089-8667/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1210 06:43:08.650911   42868 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205/id_rsa...
	I1210 06:43:08.712086   42868 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205/cert-options-802205.rawdisk...
	I1210 06:43:08.712112   42868 main.go:143] libmachine: Writing magic tar header
	I1210 06:43:08.712132   42868 main.go:143] libmachine: Writing SSH key tar header
	I1210 06:43:08.712217   42868 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205 ...
	I1210 06:43:08.712276   42868 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205
	I1210 06:43:08.712301   42868 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205 (perms=drwx------)
	I1210 06:43:08.712313   42868 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22089-8667/.minikube/machines
	I1210 06:43:08.712321   42868 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22089-8667/.minikube/machines (perms=drwxr-xr-x)
	I1210 06:43:08.712330   42868 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 06:43:08.712343   42868 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22089-8667/.minikube (perms=drwxr-xr-x)
	I1210 06:43:08.712350   42868 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22089-8667
	I1210 06:43:08.712372   42868 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22089-8667 (perms=drwxrwxr-x)
	I1210 06:43:08.712383   42868 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1210 06:43:08.712393   42868 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 06:43:08.712403   42868 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1210 06:43:08.712412   42868 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 06:43:08.712430   42868 main.go:143] libmachine: checking permissions on dir: /home
	I1210 06:43:08.712436   42868 main.go:143] libmachine: skipping /home - not owner
	I1210 06:43:08.712439   42868 main.go:143] libmachine: defining domain...
	I1210 06:43:08.713737   42868 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>cert-options-802205</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205/cert-options-802205.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-cert-options-802205'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1210 06:43:08.719209   42868 main.go:143] libmachine: domain cert-options-802205 has defined MAC address 52:54:00:38:28:54 in network default
	I1210 06:43:08.720029   42868 main.go:143] libmachine: domain cert-options-802205 has defined MAC address 52:54:00:fd:ab:c7 in network mk-cert-options-802205
	I1210 06:43:08.720039   42868 main.go:143] libmachine: starting domain...
	I1210 06:43:08.720044   42868 main.go:143] libmachine: ensuring networks are active...
	I1210 06:43:08.721036   42868 main.go:143] libmachine: Ensuring network default is active
	I1210 06:43:08.721558   42868 main.go:143] libmachine: Ensuring network mk-cert-options-802205 is active
	I1210 06:43:08.722291   42868 main.go:143] libmachine: getting domain XML...
	I1210 06:43:08.723535   42868 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>cert-options-802205</name>
	  <uuid>ad80783d-4cc8-41a8-9924-dfc36bcf9f8e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22089-8667/.minikube/machines/cert-options-802205/cert-options-802205.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:fd:ab:c7'/>
	      <source network='mk-cert-options-802205'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:38:28:54'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
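	The two MAC addresses the log reports for cert-options-802205 come straight from the domain XML above. A short Go sketch that parses them out with encoding/xml (the struct covers only the fields needed here):

    // Illustrative sketch: extract interface MAC addresses and their networks
    // from a libvirt domain XML document.
    package main

    import (
        "encoding/xml"
        "fmt"
    )

    type domain struct {
        Interfaces []struct {
            MAC struct {
                Address string `xml:"address,attr"`
            } `xml:"mac"`
            Source struct {
                Network string `xml:"network,attr"`
            } `xml:"source"`
        } `xml:"devices>interface"`
    }

    func main() {
        domainXML := `<domain type='kvm'>
      <devices>
        <interface type='network'>
          <mac address='52:54:00:fd:ab:c7'/>
          <source network='mk-cert-options-802205'/>
        </interface>
        <interface type='network'>
          <mac address='52:54:00:38:28:54'/>
          <source network='default'/>
        </interface>
      </devices>
    </domain>`
        var d domain
        if err := xml.Unmarshal([]byte(domainXML), &d); err != nil {
            panic(err)
        }
        for _, iface := range d.Interfaces {
            fmt.Printf("MAC %s in network %s\n", iface.MAC.Address, iface.Source.Network)
        }
    }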
	
	I1210 06:43:06.758330   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:06.759114   42676 main.go:143] libmachine: domain NoKubernetes-894399 has current primary IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:06.759137   42676 main.go:143] libmachine: found domain IP: 192.168.61.218
	I1210 06:43:06.759165   42676 main.go:143] libmachine: reserving static IP address...
	I1210 06:43:06.759737   42676 main.go:143] libmachine: unable to find host DHCP lease matching {name: "NoKubernetes-894399", mac: "52:54:00:c1:5c:b9", ip: "192.168.61.218"} in network mk-NoKubernetes-894399
	I1210 06:43:07.003732   42676 main.go:143] libmachine: reserved static IP address 192.168.61.218 for domain NoKubernetes-894399
	I1210 06:43:07.003755   42676 main.go:143] libmachine: waiting for SSH...
	I1210 06:43:07.003882   42676 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 06:43:07.007047   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.007527   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:07.007553   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.007760   42676 main.go:143] libmachine: Using SSH client type: native
	I1210 06:43:07.008157   42676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I1210 06:43:07.008175   42676 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 06:43:07.119438   42676 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:43:07.119903   42676 main.go:143] libmachine: domain creation complete
	I1210 06:43:07.121605   42676 machine.go:94] provisionDockerMachine start ...
	I1210 06:43:07.124032   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.124402   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:07.124429   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.124610   42676 main.go:143] libmachine: Using SSH client type: native
	I1210 06:43:07.124818   42676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I1210 06:43:07.124828   42676 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:43:07.236484   42676 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 06:43:07.236510   42676 buildroot.go:166] provisioning hostname "NoKubernetes-894399"
	I1210 06:43:07.240066   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.240667   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:07.240705   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.240969   42676 main.go:143] libmachine: Using SSH client type: native
	I1210 06:43:07.241193   42676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I1210 06:43:07.241209   42676 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-894399 && echo "NoKubernetes-894399" | sudo tee /etc/hostname
	I1210 06:43:07.370549   42676 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-894399
	
	I1210 06:43:07.373904   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.374454   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:07.374485   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.374680   42676 main.go:143] libmachine: Using SSH client type: native
	I1210 06:43:07.374958   42676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I1210 06:43:07.374975   42676 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-894399' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-894399/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-894399' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:43:07.496656   42676 main.go:143] libmachine: SSH cmd err, output: <nil>: 
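	The hostname and /etc/hosts commands above run over SSH against the freshly booted VM. A minimal sketch of running such a command with golang.org/x/crypto/ssh (address, user, and key path taken from the log; host-key checking is skipped because this is a throwaway test VM):

    // Illustrative sketch: run a provisioning command on the guest over SSH.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("192.168.61.218:22", "docker",
            "/home/jenkins/minikube-integration/22089-8667/.minikube/machines/NoKubernetes-894399/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }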
	I1210 06:43:07.496695   42676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8667/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8667/.minikube}
	I1210 06:43:07.496737   42676 buildroot.go:174] setting up certificates
	I1210 06:43:07.496745   42676 provision.go:84] configureAuth start
	I1210 06:43:07.499983   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.500469   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:07.500522   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.502835   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.503148   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:07.503173   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.503335   42676 provision.go:143] copyHostCerts
	I1210 06:43:07.503382   42676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem
	I1210 06:43:07.503428   42676 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem, removing ...
	I1210 06:43:07.503445   42676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem
	I1210 06:43:07.503524   42676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/ca.pem (1082 bytes)
	I1210 06:43:07.503640   42676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem
	I1210 06:43:07.503662   42676 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem, removing ...
	I1210 06:43:07.503667   42676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem
	I1210 06:43:07.503701   42676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/cert.pem (1123 bytes)
	I1210 06:43:07.503754   42676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem
	I1210 06:43:07.503770   42676 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem, removing ...
	I1210 06:43:07.503776   42676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem
	I1210 06:43:07.503800   42676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8667/.minikube/key.pem (1675 bytes)
	I1210 06:43:07.503856   42676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-894399 san=[127.0.0.1 192.168.61.218 NoKubernetes-894399 localhost minikube]
	I1210 06:43:07.613223   42676 provision.go:177] copyRemoteCerts
	I1210 06:43:07.613283   42676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:43:07.615724   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.616185   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:07.616211   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.616343   42676 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/NoKubernetes-894399/id_rsa Username:docker}
	I1210 06:43:07.702509   42676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:43:07.702576   42676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:43:07.733433   42676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:43:07.733506   42676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:43:07.762448   42676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:43:07.762523   42676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 06:43:07.791605   42676 provision.go:87] duration metric: took 294.837898ms to configureAuth
	I1210 06:43:07.791633   42676 buildroot.go:189] setting minikube options for container-runtime
	I1210 06:43:07.791792   42676 config.go:182] Loaded profile config "NoKubernetes-894399": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1210 06:43:07.794779   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.795249   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:07.795279   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:07.795452   42676 main.go:143] libmachine: Using SSH client type: native
	I1210 06:43:07.795677   42676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I1210 06:43:07.795697   42676 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:43:08.067413   42676 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:43:08.067445   42676 machine.go:97] duration metric: took 945.820659ms to provisionDockerMachine
	I1210 06:43:08.067458   42676 client.go:176] duration metric: took 17.041212479s to LocalClient.Create
	I1210 06:43:08.067480   42676 start.go:167] duration metric: took 17.041266403s to libmachine.API.Create "NoKubernetes-894399"
	I1210 06:43:08.067493   42676 start.go:293] postStartSetup for "NoKubernetes-894399" (driver="kvm2")
	I1210 06:43:08.067504   42676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:43:08.067586   42676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:43:08.070440   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.070839   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:08.070863   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.071025   42676 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/NoKubernetes-894399/id_rsa Username:docker}
	I1210 06:43:08.157600   42676 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:43:08.162742   42676 info.go:137] Remote host: Buildroot 2025.02
	I1210 06:43:08.162776   42676 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/addons for local assets ...
	I1210 06:43:08.162855   42676 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8667/.minikube/files for local assets ...
	I1210 06:43:08.162953   42676 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem -> 125882.pem in /etc/ssl/certs
	I1210 06:43:08.162967   42676 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem -> /etc/ssl/certs/125882.pem
	I1210 06:43:08.163072   42676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:43:08.174948   42676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/ssl/certs/125882.pem --> /etc/ssl/certs/125882.pem (1708 bytes)
	I1210 06:43:08.204987   42676 start.go:296] duration metric: took 137.478153ms for postStartSetup
	I1210 06:43:08.208537   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.209153   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:08.209193   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.209531   42676 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/NoKubernetes-894399/config.json ...
	I1210 06:43:08.209725   42676 start.go:128] duration metric: took 17.184833802s to createHost
	I1210 06:43:08.211879   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.212296   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:08.212314   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.212490   42676 main.go:143] libmachine: Using SSH client type: native
	I1210 06:43:08.212680   42676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I1210 06:43:08.212690   42676 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 06:43:08.324591   42676 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765348988.281912312
	
	I1210 06:43:08.324610   42676 fix.go:216] guest clock: 1765348988.281912312
	I1210 06:43:08.324617   42676 fix.go:229] Guest: 2025-12-10 06:43:08.281912312 +0000 UTC Remote: 2025-12-10 06:43:08.209739245 +0000 UTC m=+17.300165670 (delta=72.173067ms)
	I1210 06:43:08.324631   42676 fix.go:200] guest clock delta is within tolerance: 72.173067ms
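	The guest-clock check above parses the guest's date +%s.%N output and accepts the drift when it stays within a tolerance (about 72ms here). A small sketch of that comparison; the 2s threshold is an assumed value, not necessarily the one minikube uses:

    // Illustrative sketch: compute guest/host clock delta and test it against
    // a tolerance, as the fix.go lines above do.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestEpoch, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        const tolerance = 2 * time.Second // assumed threshold
        delta, err := clockDelta("1765348988.281912312", time.Unix(1765348988, 209739245))
        if err != nil {
            panic(err)
        }
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }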
	I1210 06:43:08.324635   42676 start.go:83] releasing machines lock for "NoKubernetes-894399", held for 17.299802055s
	I1210 06:43:08.327830   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.328280   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:08.328314   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.328845   42676 ssh_runner.go:195] Run: cat /version.json
	I1210 06:43:08.328944   42676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:43:08.332401   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.332528   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.332894   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:08.332971   42676 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:5c:b9", ip: ""} in network mk-NoKubernetes-894399: {Iface:virbr3 ExpiryTime:2025-12-10 07:43:06 +0000 UTC Type:0 Mac:52:54:00:c1:5c:b9 Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:nokubernetes-894399 Clientid:01:52:54:00:c1:5c:b9}
	I1210 06:43:08.333004   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.333041   42676 main.go:143] libmachine: domain NoKubernetes-894399 has defined IP address 192.168.61.218 and MAC address 52:54:00:c1:5c:b9 in network mk-NoKubernetes-894399
	I1210 06:43:08.333221   42676 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/NoKubernetes-894399/id_rsa Username:docker}
	I1210 06:43:08.333412   42676 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/NoKubernetes-894399/id_rsa Username:docker}
	I1210 06:43:08.415688   42676 ssh_runner.go:195] Run: systemctl --version
	I1210 06:43:08.450569   42676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:43:08.612974   42676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:43:08.619795   42676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:43:08.619888   42676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:43:08.644070   42676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:43:08.644099   42676 start.go:496] detecting cgroup driver to use...
	I1210 06:43:08.644196   42676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:43:08.667764   42676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:43:08.684819   42676 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:43:08.684885   42676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:43:08.702975   42676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:43:08.719723   42676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:43:08.869046   42676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:43:09.085561   42676 docker.go:234] disabling docker service ...
	I1210 06:43:09.085644   42676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:43:09.103877   42676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:43:09.119422   42676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:43:09.289037   42676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:43:09.445859   42676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:43:09.462297   42676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:43:09.485973   42676 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1210 06:43:09.486012   42676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1210 06:43:09.486053   42676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:43:09.499286   42676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:43:09.499389   42676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:43:09.515559   42676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:43:09.528243   42676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:43:09.544128   42676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
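	The sed commands above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf. The same edit expressed in Go, to be run inside the guest (file path and keys come from the log; an equivalent sketch, not minikube's code):

    // Illustrative sketch: replace "<key> = ..." lines in the CRI-O drop-in config.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func setCrioOption(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        updated := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = "%s"`, key, value)))
        return os.WriteFile(path, updated, 0o644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        for key, value := range map[string]string{
            "pause_image":    "registry.k8s.io/pause:3.9",
            "cgroup_manager": "cgroupfs",
        } {
            if err := setCrioOption(conf, key, value); err != nil {
                fmt.Println(err)
            }
        }
    }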
	I1210 06:43:09.558836   42676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:43:09.570717   42676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 06:43:09.570771   42676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 06:43:09.595917   42676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
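	The netfilter step above falls back to loading br_netfilter when the bridge sysctl is absent, then enables IPv4 forwarding. A compact sketch of that sequence (requires root inside the guest):

    // Illustrative sketch: ensure the bridge netfilter sysctl exists, then
    // enable IPv4 forwarding, mirroring the log lines above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func ensureBridgeNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
            // The sysctl only appears once the br_netfilter module is loaded.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println(err)
        }
    }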
	I1210 06:43:09.607915   42676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:43:09.759667   42676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:43:09.886831   42676 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:43:09.886915   42676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:43:09.892654   42676 start.go:564] Will wait 60s for crictl version
	I1210 06:43:09.892719   42676 ssh_runner.go:195] Run: which crictl
	I1210 06:43:09.898075   42676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 06:43:09.940474   42676 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 06:43:09.940550   42676 ssh_runner.go:195] Run: crio --version
	I1210 06:43:09.972146   42676 ssh_runner.go:195] Run: crio --version
	I1210 06:43:10.010735   42676 out.go:179] * Preparing CRI-O 1.29.1 ...
	I1210 06:43:10.012309   42676 ssh_runner.go:195] Run: rm -f paused
	I1210 06:43:10.019279   42676 out.go:179] * Done! minikube is ready without Kubernetes!
	I1210 06:43:10.021917   42676 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	W1210 06:43:07.897536   42439 pod_ready.go:104] pod "kube-apiserver-pause-824458" is not "Ready", error: <nil>
	W1210 06:43:10.397931   42439 pod_ready.go:104] pod "kube-apiserver-pause-824458" is not "Ready", error: <nil>
	I1210 06:43:10.897713   42439 pod_ready.go:94] pod "kube-apiserver-pause-824458" is "Ready"
	I1210 06:43:10.897739   42439 pod_ready.go:86] duration metric: took 7.506937419s for pod "kube-apiserver-pause-824458" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:10.900611   42439 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-824458" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:10.905167   42439 pod_ready.go:94] pod "kube-controller-manager-pause-824458" is "Ready"
	I1210 06:43:10.905195   42439 pod_ready.go:86] duration metric: took 4.556295ms for pod "kube-controller-manager-pause-824458" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:10.907347   42439 pod_ready.go:83] waiting for pod "kube-proxy-fzkpb" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:10.910796   42439 pod_ready.go:94] pod "kube-proxy-fzkpb" is "Ready"
	I1210 06:43:10.910815   42439 pod_ready.go:86] duration metric: took 3.43195ms for pod "kube-proxy-fzkpb" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:10.913199   42439 pod_ready.go:83] waiting for pod "kube-scheduler-pause-824458" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:11.095673   42439 pod_ready.go:94] pod "kube-scheduler-pause-824458" is "Ready"
	I1210 06:43:11.095710   42439 pod_ready.go:86] duration metric: took 182.495044ms for pod "kube-scheduler-pause-824458" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:43:11.095727   42439 pod_ready.go:40] duration metric: took 12.726108929s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:43:11.151329   42439 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:43:11.153344   42439 out.go:179] * Done! kubectl is now configured to use "pause-824458" cluster and "default" namespace by default
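
The kubectl check two lines above reports kubectl 1.34.3 against cluster 1.34.2 as "minor skew: 0", i.e. only the minor version components are compared. A small illustrative Go sketch of that comparison (hypothetical helper, not minikube's code; a real check would also confirm the major versions match):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version string,
    // returning -1 when the string does not parse.
    func minor(v string) int {
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            return -1
        }
        n, err := strconv.Atoi(parts[1])
        if err != nil {
            return -1
        }
        return n
    }

    func main() {
        kubectl, cluster := "1.34.3", "1.34.2"
        skew := minor(kubectl) - minor(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    }
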
	
	
	==> CRI-O <==
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.817094741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348991817058176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df3acdc0-a1e6-412a-b884-49e71c2921f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.818144434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d723fbe3-89aa-4976-9faa-2b83305ccaf4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.818291779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d723fbe3-89aa-4976-9faa-2b83305ccaf4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.818527165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17,PodSandboxId:17e9b8b5521c8d1107efe6b492176ab5c1bf80492021c2940cf6302233588753,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348976528555017,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2,PodSandboxId:d377cdb4422e0c5021f73e0968bfb828bd34bd83bedc28756be30f885ee7e1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348976351568750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348972699766751,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed,PodSandboxId:3ab16ec874c3236cdcb8b3a064f059091aca0047352e3bc803653b41de21003b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3
cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348968904722865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_CREATED,CreatedAt:1765348968877156419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb,PodSandboxId:c0d96aaf7ef304940cd0ca27c5e8e7a0a7f47662c0739adabc3cc639758d8afb,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348968812776829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c,
PodSandboxId:4da13908474bc994312eb8e4971103b791dd50fa81abf35498ad8e282cd72d62,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348968738509389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627,PodSandboxId:ee19c4293fc5f68510dfafccc03a7d97609a4ca5e9de71a5ecb8f8f5d66dc0d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765348913167169250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a,PodSandboxId:fabf053730f558542e3737713ea6d0699c901e78884675a78146d015b32d4703,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765348912681473254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-
9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524,PodSandboxId:cad0c151ce6d00e6cf1dd951423e1972e1e03af891d8ebc46e300fcb9b5ab924,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765348900993030245,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string
{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1,PodSandboxId:bbb8f8f9be7a468a671dfa8ff418d12e9aafb784bf51d5fddc73f68099253899,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765348901025698393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea,PodSandboxId:c50d98716e268b1f96c1b1dc76df89fde5617f72dfe1fb67ff8004a6ff545fe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765348900925163738,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d723fbe3-89aa-4976-9faa-2b83305ccaf4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.865449415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c9aa7df-a153-43f9-9fa6-d5a46f0e56ec name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.865551136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c9aa7df-a153-43f9-9fa6-d5a46f0e56ec name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.867238771Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25a7867b-16a3-4f45-a0dc-5ac09bf51b8b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.867609329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348991867586733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25a7867b-16a3-4f45-a0dc-5ac09bf51b8b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.868788902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=432a779a-d635-4d1d-a7ec-c7a6ac2cd082 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.868896755Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=432a779a-d635-4d1d-a7ec-c7a6ac2cd082 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.869331612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17,PodSandboxId:17e9b8b5521c8d1107efe6b492176ab5c1bf80492021c2940cf6302233588753,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348976528555017,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2,PodSandboxId:d377cdb4422e0c5021f73e0968bfb828bd34bd83bedc28756be30f885ee7e1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348976351568750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348972699766751,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed,PodSandboxId:3ab16ec874c3236cdcb8b3a064f059091aca0047352e3bc803653b41de21003b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3
cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348968904722865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_CREATED,CreatedAt:1765348968877156419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb,PodSandboxId:c0d96aaf7ef304940cd0ca27c5e8e7a0a7f47662c0739adabc3cc639758d8afb,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348968812776829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c,
PodSandboxId:4da13908474bc994312eb8e4971103b791dd50fa81abf35498ad8e282cd72d62,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348968738509389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627,PodSandboxId:ee19c4293fc5f68510dfafccc03a7d97609a4ca5e9de71a5ecb8f8f5d66dc0d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765348913167169250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a,PodSandboxId:fabf053730f558542e3737713ea6d0699c901e78884675a78146d015b32d4703,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765348912681473254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-
9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524,PodSandboxId:cad0c151ce6d00e6cf1dd951423e1972e1e03af891d8ebc46e300fcb9b5ab924,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765348900993030245,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string
{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1,PodSandboxId:bbb8f8f9be7a468a671dfa8ff418d12e9aafb784bf51d5fddc73f68099253899,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765348901025698393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea,PodSandboxId:c50d98716e268b1f96c1b1dc76df89fde5617f72dfe1fb67ff8004a6ff545fe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765348900925163738,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=432a779a-d635-4d1d-a7ec-c7a6ac2cd082 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.924519194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db485492-cfc9-43d8-90a4-52dde2044916 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.925099799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db485492-cfc9-43d8-90a4-52dde2044916 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.927657177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0cd97f1d-7de4-4bfb-8de3-e3dff4c6df55 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.928209167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348991928085801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0cd97f1d-7de4-4bfb-8de3-e3dff4c6df55 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.929542739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1a9180d-b7da-4fbd-a4f6-59ecb5cf6580 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.929830165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1a9180d-b7da-4fbd-a4f6-59ecb5cf6580 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.930475903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17,PodSandboxId:17e9b8b5521c8d1107efe6b492176ab5c1bf80492021c2940cf6302233588753,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348976528555017,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2,PodSandboxId:d377cdb4422e0c5021f73e0968bfb828bd34bd83bedc28756be30f885ee7e1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348976351568750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348972699766751,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed,PodSandboxId:3ab16ec874c3236cdcb8b3a064f059091aca0047352e3bc803653b41de21003b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3
cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348968904722865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_CREATED,CreatedAt:1765348968877156419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb,PodSandboxId:c0d96aaf7ef304940cd0ca27c5e8e7a0a7f47662c0739adabc3cc639758d8afb,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348968812776829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c,
PodSandboxId:4da13908474bc994312eb8e4971103b791dd50fa81abf35498ad8e282cd72d62,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348968738509389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627,PodSandboxId:ee19c4293fc5f68510dfafccc03a7d97609a4ca5e9de71a5ecb8f8f5d66dc0d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765348913167169250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a,PodSandboxId:fabf053730f558542e3737713ea6d0699c901e78884675a78146d015b32d4703,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765348912681473254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-
9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524,PodSandboxId:cad0c151ce6d00e6cf1dd951423e1972e1e03af891d8ebc46e300fcb9b5ab924,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765348900993030245,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string
{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1,PodSandboxId:bbb8f8f9be7a468a671dfa8ff418d12e9aafb784bf51d5fddc73f68099253899,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765348901025698393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea,PodSandboxId:c50d98716e268b1f96c1b1dc76df89fde5617f72dfe1fb67ff8004a6ff545fe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765348900925163738,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1a9180d-b7da-4fbd-a4f6-59ecb5cf6580 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.986724113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92535a5a-3360-42db-8383-2971a5bf2817 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.986813104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92535a5a-3360-42db-8383-2971a5bf2817 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.988466179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=438da5bf-e851-40d7-930e-c4b3c0bd8311 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.989452873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348991989383143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=438da5bf-e851-40d7-930e-c4b3c0bd8311 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.990487410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3a48685-614f-4a85-b75c-cbe8bfac0243 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.990549235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3a48685-614f-4a85-b75c-cbe8bfac0243 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:11 pause-824458 crio[2587]: time="2025-12-10 06:43:11.990850943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17,PodSandboxId:17e9b8b5521c8d1107efe6b492176ab5c1bf80492021c2940cf6302233588753,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348976528555017,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2,PodSandboxId:d377cdb4422e0c5021f73e0968bfb828bd34bd83bedc28756be30f885ee7e1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348976351568750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348972699766751,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed,PodSandboxId:3ab16ec874c3236cdcb8b3a064f059091aca0047352e3bc803653b41de21003b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3
cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348968904722865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_CREATED,CreatedAt:1765348968877156419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb,PodSandboxId:c0d96aaf7ef304940cd0ca27c5e8e7a0a7f47662c0739adabc3cc639758d8afb,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348968812776829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c,
PodSandboxId:4da13908474bc994312eb8e4971103b791dd50fa81abf35498ad8e282cd72d62,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348968738509389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627,PodSandboxId:ee19c4293fc5f68510dfafccc03a7d97609a4ca5e9de71a5ecb8f8f5d66dc0d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765348913167169250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a,PodSandboxId:fabf053730f558542e3737713ea6d0699c901e78884675a78146d015b32d4703,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765348912681473254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-
9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524,PodSandboxId:cad0c151ce6d00e6cf1dd951423e1972e1e03af891d8ebc46e300fcb9b5ab924,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765348900993030245,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string
{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1,PodSandboxId:bbb8f8f9be7a468a671dfa8ff418d12e9aafb784bf51d5fddc73f68099253899,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765348901025698393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea,PodSandboxId:c50d98716e268b1f96c1b1dc76df89fde5617f72dfe1fb67ff8004a6ff545fe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765348900925163738,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3a48685-614f-4a85-b75c-cbe8bfac0243 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	369fca45db110       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   15 seconds ago       Running             kube-proxy                1                   17e9b8b5521c8       kube-proxy-fzkpb                       kube-system
	6ebb49a881e3e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago       Running             coredns                   1                   d377cdb4422e0       coredns-66bc5c9577-7glnx               kube-system
	13df8d1efc84d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   19 seconds ago       Running             kube-controller-manager   2                   ce378d362cbe1       kube-controller-manager-pause-824458   kube-system
	83d3a0f909ff0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   23 seconds ago       Running             kube-apiserver            1                   3ab16ec874c32       kube-apiserver-pause-824458            kube-system
	d422e2277bff4       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   23 seconds ago       Created             kube-controller-manager   1                   ce378d362cbe1       kube-controller-manager-pause-824458   kube-system
	75ba0c875bd7e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   23 seconds ago       Running             kube-scheduler            1                   c0d96aaf7ef30       kube-scheduler-pause-824458            kube-system
	d41ca68871860       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   23 seconds ago       Running             etcd                      1                   4da13908474bc       etcd-pause-824458                      kube-system
	a0acef2ce8c58       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   ee19c4293fc5f       coredns-66bc5c9577-7glnx               kube-system
	9adafa9471f74       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   About a minute ago   Exited              kube-proxy                0                   fabf053730f55       kube-proxy-fzkpb                       kube-system
	a3e7c9f3faf92       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Exited              kube-apiserver            0                   bbb8f8f9be7a4       kube-apiserver-pause-824458            kube-system
	21faccb8890de       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Exited              etcd                      0                   cad0c151ce6d0       etcd-pause-824458                      kube-system
	ded8e61ceef81       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Exited              kube-scheduler            0                   c50d98716e268       kube-scheduler-pause-824458            kube-system
	
	
	==> coredns [6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45017 - 11514 "HINFO IN 7603812671637065073.6498163450613470329. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.068405721s
	
	
	==> coredns [a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53026 - 12724 "HINFO IN 1482455045923302839.9136556925152517799. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.11371645s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-824458
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-824458
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=pause-824458
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_41_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:41:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-824458
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:43:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:42:56 +0000   Wed, 10 Dec 2025 06:41:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:42:56 +0000   Wed, 10 Dec 2025 06:41:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:42:56 +0000   Wed, 10 Dec 2025 06:41:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:42:56 +0000   Wed, 10 Dec 2025 06:41:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    pause-824458
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa59474e85ed4bcba91266b382e279d8
	  System UUID:                fa59474e-85ed-4bcb-a912-66b382e279d8
	  Boot ID:                    0335f1d7-610e-4c4d-a37e-08ad192c61e7
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7glnx                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     80s
	  kube-system                 etcd-pause-824458                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         87s
	  kube-system                 kube-apiserver-pause-824458             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-824458    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-fzkpb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-824458             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientPID     92s (x7 over 92s)  kubelet          Node pause-824458 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-824458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-824458 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s                kubelet          Node pause-824458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s                kubelet          Node pause-824458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s                kubelet          Node pause-824458 status is now: NodeHasSufficientPID
	  Normal  NodeReady                84s                kubelet          Node pause-824458 status is now: NodeReady
	  Normal  RegisteredNode           81s                node-controller  Node pause-824458 event: Registered Node pause-824458 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node pause-824458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node pause-824458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node pause-824458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node pause-824458 event: Registered Node pause-824458 in Controller
	
	
	==> dmesg <==
	[Dec10 06:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001265] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004510] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.165609] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083790] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097693] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.142647] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.958009] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.024385] kauditd_printk_skb: 189 callbacks suppressed
	[Dec10 06:42] kauditd_printk_skb: 96 callbacks suppressed
	[  +4.703325] kauditd_printk_skb: 207 callbacks suppressed
	[Dec10 06:43] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524] <==
	{"level":"info","ts":"2025-12-10T06:41:58.013605Z","caller":"traceutil/trace.go:172","msg":"trace[176437616] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"718.278643ms","start":"2025-12-10T06:41:57.295313Z","end":"2025-12-10T06:41:58.013592Z","steps":["trace[176437616] 'process raft request'  (duration: 718.16788ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:41:58.013887Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T06:41:57.295295Z","time spent":"718.350366ms","remote":"127.0.0.1:43990","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5265,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/pause-824458\" mod_revision:334 > success:<request_put:<key:\"/registry/minions/pause-824458\" value_size:5227 >> failure:<request_range:<key:\"/registry/minions/pause-824458\" > >"}
	{"level":"warn","ts":"2025-12-10T06:41:58.013624Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T06:41:57.461533Z","time spent":"552.087389ms","remote":"127.0.0.1:44006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":5651,"request content":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-7glnx\" limit:1 "}
	{"level":"warn","ts":"2025-12-10T06:42:05.298026Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.024395ms","expected-duration":"100ms","prefix":"","request":"header:<ID:348073525556426284 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-4ibnu5lxmlmudm57g4t63v3qpe\" mod_revision:412 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-4ibnu5lxmlmudm57g4t63v3qpe\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-4ibnu5lxmlmudm57g4t63v3qpe\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T06:42:05.298124Z","caller":"traceutil/trace.go:172","msg":"trace[1753507547] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"200.982038ms","start":"2025-12-10T06:42:05.097130Z","end":"2025-12-10T06:42:05.298112Z","steps":["trace[1753507547] 'process raft request'  (duration: 72.82145ms)","trace[1753507547] 'compare'  (duration: 127.723645ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:42:05.613324Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.871556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-7glnx\" limit:1 ","response":"range_response_count:1 size:5628"}
	{"level":"info","ts":"2025-12-10T06:42:05.613409Z","caller":"traceutil/trace.go:172","msg":"trace[2001274150] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-7glnx; range_end:; response_count:1; response_revision:422; }","duration":"150.972787ms","start":"2025-12-10T06:42:05.462421Z","end":"2025-12-10T06:42:05.613393Z","steps":["trace[2001274150] 'range keys from in-memory index tree'  (duration: 150.640289ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:42:33.052494Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T06:42:33.052572Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-824458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.53:2380"],"advertise-client-urls":["https://192.168.39.53:2379"]}
	{"level":"error","ts":"2025-12-10T06:42:33.052688Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T06:42:33.143350Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T06:42:33.143410Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:42:33.143435Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8389b8f6c4f004d4","current-leader-member-id":"8389b8f6c4f004d4"}
	{"level":"info","ts":"2025-12-10T06:42:33.143462Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-10T06:42:33.143493Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-10T06:42:33.143544Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T06:42:33.143607Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T06:42:33.143614Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T06:42:33.143670Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.53:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T06:42:33.143708Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.53:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T06:42:33.144109Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.53:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:42:33.147280Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.53:2380"}
	{"level":"error","ts":"2025-12-10T06:42:33.147355Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.53:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:42:33.147407Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.53:2380"}
	{"level":"info","ts":"2025-12-10T06:42:33.147429Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-824458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.53:2380"],"advertise-client-urls":["https://192.168.39.53:2379"]}
	
	
	==> etcd [d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c] <==
	{"level":"warn","ts":"2025-12-10T06:42:54.643837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.664665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.690355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.707831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.731995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.752527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.768382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.799681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.806274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.819647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.844754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.857628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.871636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.881594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.898380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.910354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.950315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.975367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.985354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.996450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:55.013081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:55.067938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:59.503933Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.221104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-12-10T06:42:59.504053Z","caller":"traceutil/trace.go:172","msg":"trace[500911789] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:495; }","duration":"119.367742ms","start":"2025-12-10T06:42:59.384672Z","end":"2025-12-10T06:42:59.504039Z","steps":["trace[500911789] 'range keys from in-memory index tree'  (duration: 119.035572ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:42:59.637521Z","caller":"traceutil/trace.go:172","msg":"trace[402092905] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"124.104016ms","start":"2025-12-10T06:42:59.513400Z","end":"2025-12-10T06:42:59.637504Z","steps":["trace[402092905] 'process raft request'  (duration: 124.006427ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:43:12 up 1 min,  0 users,  load average: 1.29, 0.53, 0.20
	Linux pause-824458 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed] <==
	I1210 06:42:55.988540       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:42:55.988562       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:42:55.991607       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:42:56.000407       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 06:42:56.000487       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 06:42:56.000515       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 06:42:56.000622       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 06:42:56.000675       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:42:56.000709       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:42:56.000716       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:42:56.015591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:42:56.021105       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 06:42:56.021239       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:42:56.022709       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:42:56.032089       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:42:56.033502       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:42:56.083331       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:42:56.804531       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:42:57.817355       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:42:57.873766       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:42:57.907410       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:42:57.917855       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:42:59.509665       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:42:59.512779       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:42:59.643880       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1] <==
	W1210 06:42:33.086666       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.086728       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.086783       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.086837       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.086928       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087290       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087459       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087516       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087562       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087602       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087641       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087691       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087806       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087890       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.088699       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.089505       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090004       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090415       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090584       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090644       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090712       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090770       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090824       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090884       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090944       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6] <==
	I1210 06:42:59.347120       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 06:42:59.349324       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:42:59.351628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 06:42:59.353923       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:42:59.354005       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 06:42:59.358460       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:42:59.358506       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:42:59.358513       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:42:59.359077       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:42:59.363587       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:42:59.364965       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:42:59.374095       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:42:59.375417       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 06:42:59.376049       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 06:42:59.378937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:42:59.380278       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 06:42:59.380522       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:42:59.380801       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:42:59.380890       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:42:59.385916       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:42:59.386008       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:42:59.387398       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:42:59.387401       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:42:59.391681       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:42:59.391697       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9] <==
	
	
	==> kube-proxy [369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17] <==
	I1210 06:42:56.783164       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:42:56.883816       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:42:56.883914       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.53"]
	E1210 06:42:56.884153       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:42:56.946956       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 06:42:56.947057       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 06:42:56.947128       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:42:56.964514       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:42:56.965221       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:42:56.965979       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:42:56.967767       1 config.go:200] "Starting service config controller"
	I1210 06:42:56.973775       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:42:56.970974       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:42:56.975898       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:42:56.975941       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:42:56.970989       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:42:56.975964       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:42:56.972746       1 config.go:309] "Starting node config controller"
	I1210 06:42:56.975982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:42:56.975989       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:42:57.074741       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:42:57.077272       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a] <==
	I1210 06:41:53.134551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:41:53.239466       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:41:53.239517       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.53"]
	E1210 06:41:53.239650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:41:53.330066       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 06:41:53.330157       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 06:41:53.330272       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:41:53.340936       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:41:53.341458       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:41:53.341484       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:41:53.348674       1 config.go:200] "Starting service config controller"
	I1210 06:41:53.348840       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:41:53.348935       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:41:53.350894       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:41:53.348947       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:41:53.351021       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:41:53.349459       1 config.go:309] "Starting node config controller"
	I1210 06:41:53.351121       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:41:53.351129       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:41:53.451895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:41:53.451941       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:41:53.451962       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb] <==
	I1210 06:42:55.432609       1 serving.go:386] Generated self-signed cert in-memory
	I1210 06:42:56.086849       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 06:42:56.086967       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:42:56.097383       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:42:56.097392       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 06:42:56.097414       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 06:42:56.097458       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:42:56.097460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:42:56.097465       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:42:56.097484       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:42:56.097490       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:42:56.197736       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1210 06:42:56.197917       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:42:56.198297       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea] <==
	E1210 06:41:44.112123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:41:44.112258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:41:44.110604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:41:44.112586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:41:44.112714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:41:44.933762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 06:41:44.993664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:41:44.994040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:41:45.111778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:41:45.112383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:41:45.138287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:41:45.138376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:41:45.191921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:41:45.231723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:41:45.262780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:41:45.267021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 06:41:45.353583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:41:45.379136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:41:45.394404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:41:45.411911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1210 06:41:47.495867       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:42:33.063926       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 06:42:33.067777       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 06:42:33.067817       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1210 06:42:33.067848       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 10 06:42:53 pause-824458 kubelet[3276]: E1210 06:42:53.161987    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:53 pause-824458 kubelet[3276]: E1210 06:42:53.170743    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:53 pause-824458 kubelet[3276]: E1210 06:42:53.175694    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:53 pause-824458 kubelet[3276]: E1210 06:42:53.176509    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:53 pause-824458 kubelet[3276]: I1210 06:42:53.670220    3276 kubelet_node_status.go:75] "Attempting to register node" node="pause-824458"
	Dec 10 06:42:54 pause-824458 kubelet[3276]: E1210 06:42:54.182803    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:54 pause-824458 kubelet[3276]: E1210 06:42:54.183346    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:54 pause-824458 kubelet[3276]: E1210 06:42:54.183447    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:54 pause-824458 kubelet[3276]: E1210 06:42:54.183827    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:55 pause-824458 kubelet[3276]: E1210 06:42:55.190699    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:55 pause-824458 kubelet[3276]: E1210 06:42:55.191116    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:55 pause-824458 kubelet[3276]: E1210 06:42:55.248422    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:55 pause-824458 kubelet[3276]: I1210 06:42:55.979438    3276 apiserver.go:52] "Watching apiserver"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.013160    3276 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.047460    3276 kubelet_node_status.go:124] "Node was previously registered" node="pause-824458"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.047565    3276 kubelet_node_status.go:78] "Successfully registered node" node="pause-824458"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.047587    3276 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.049802    3276 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.079256    3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a52a9b4-a54f-4b7d-9313-9597f8681754-lib-modules\") pod \"kube-proxy-fzkpb\" (UID: \"9a52a9b4-a54f-4b7d-9313-9597f8681754\") " pod="kube-system/kube-proxy-fzkpb"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.080022    3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a52a9b4-a54f-4b7d-9313-9597f8681754-xtables-lock\") pod \"kube-proxy-fzkpb\" (UID: \"9a52a9b4-a54f-4b7d-9313-9597f8681754\") " pod="kube-system/kube-proxy-fzkpb"
	Dec 10 06:43:02 pause-824458 kubelet[3276]: E1210 06:43:02.164274    3276 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765348982163977590 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 10 06:43:02 pause-824458 kubelet[3276]: E1210 06:43:02.164296    3276 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765348982163977590 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 10 06:43:02 pause-824458 kubelet[3276]: I1210 06:43:02.862088    3276 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:43:12 pause-824458 kubelet[3276]: E1210 06:43:12.167336    3276 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765348992166708542 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 10 06:43:12 pause-824458 kubelet[3276]: E1210 06:43:12.167377    3276 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765348992166708542 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-824458 -n pause-824458
helpers_test.go:270: (dbg) Run:  kubectl --context pause-824458 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-824458 -n pause-824458
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-824458 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-824458 logs -n 25: (1.213270459s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-579150 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                               │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl cat cri-docker --no-pager                                                                                                                                                               │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                          │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                    │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cri-dockerd --version                                                                                                                                                                             │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl status containerd --all --full --no-pager                                                                                                                                               │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl cat containerd --no-pager                                                                                                                                                               │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                        │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo cat /etc/containerd/config.toml                                                                                                                                                                   │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo containerd config dump                                                                                                                                                                            │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl status crio --all --full --no-pager                                                                                                                                                     │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo systemctl cat crio --no-pager                                                                                                                                                                     │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                           │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ ssh     │ -p cilium-579150 sudo crio config                                                                                                                                                                                       │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │                     │
	│ delete  │ -p cilium-579150                                                                                                                                                                                                        │ cilium-579150          │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │ 10 Dec 25 06:41 UTC │
	│ start   │ -p cert-expiration-096353 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-096353 │ jenkins │ v1.37.0 │ 10 Dec 25 06:41 UTC │ 10 Dec 25 06:42 UTC │
	│ start   │ -p pause-824458 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-824458           │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:43 UTC │
	│ start   │ -p NoKubernetes-894399 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:42 UTC │
	│ delete  │ -p NoKubernetes-894399                                                                                                                                                                                                  │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:42 UTC │
	│ start   │ -p NoKubernetes-894399 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:43 UTC │
	│ delete  │ -p offline-crio-832745                                                                                                                                                                                                  │ offline-crio-832745    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:42 UTC │
	│ start   │ -p cert-options-802205 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-802205    │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │                     │
	│ ssh     │ -p NoKubernetes-894399 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:43 UTC │                     │
	│ stop    │ -p NoKubernetes-894399                                                                                                                                                                                                  │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:43 UTC │ 10 Dec 25 06:43 UTC │
	│ start   │ -p NoKubernetes-894399 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-894399    │ jenkins │ v1.37.0 │ 10 Dec 25 06:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:43:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:43:12.865296   43201 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:43:12.865635   43201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:43:12.865658   43201 out.go:374] Setting ErrFile to fd 2...
	I1210 06:43:12.865666   43201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:43:12.866010   43201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:43:12.866576   43201 out.go:368] Setting JSON to false
	I1210 06:43:12.867800   43201 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5137,"bootTime":1765343856,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:43:12.867865   43201 start.go:143] virtualization: kvm guest
	I1210 06:43:12.870006   43201 out.go:179] * [NoKubernetes-894399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:43:12.871842   43201 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:43:12.871917   43201 notify.go:221] Checking for updates...
	I1210 06:43:12.874777   43201 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:43:12.876094   43201 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:43:12.877750   43201 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 06:43:12.878974   43201 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:43:12.880641   43201 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:43:12.882192   43201 config.go:182] Loaded profile config "NoKubernetes-894399": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1210 06:43:12.882898   43201 start.go:1806] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1210 06:43:12.882921   43201 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:43:12.929033   43201 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 06:43:12.930136   43201 start.go:309] selected driver: kvm2
	I1210 06:43:12.930146   43201 start.go:927] validating driver "kvm2" against &{Name:NoKubernetes-894399 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-894399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:43:12.930293   43201 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:43:12.931582   43201 cni.go:84] Creating CNI manager for ""
	I1210 06:43:12.931647   43201 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 06:43:12.931711   43201 start.go:353] cluster config:
	{Name:NoKubernetes-894399 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-894399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:43:12.931817   43201 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:43:12.934113   43201 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-894399
	
	
	==> CRI-O <==
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.734357359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348993734328165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e06e6472-86ec-4330-bf65-4aafa2e122ea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.735283287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8d6b115-f153-40bf-b48d-7a7600a8c8fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.735341993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8d6b115-f153-40bf-b48d-7a7600a8c8fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.735610068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17,PodSandboxId:17e9b8b5521c8d1107efe6b492176ab5c1bf80492021c2940cf6302233588753,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348976528555017,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2,PodSandboxId:d377cdb4422e0c5021f73e0968bfb828bd34bd83bedc28756be30f885ee7e1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348976351568750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348972699766751,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed,PodSandboxId:3ab16ec874c3236cdcb8b3a064f059091aca0047352e3bc803653b41de21003b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3
cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348968904722865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_CREATED,CreatedAt:1765348968877156419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb,PodSandboxId:c0d96aaf7ef304940cd0ca27c5e8e7a0a7f47662c0739adabc3cc639758d8afb,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348968812776829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c,
PodSandboxId:4da13908474bc994312eb8e4971103b791dd50fa81abf35498ad8e282cd72d62,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348968738509389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627,PodSandboxId:ee19c4293fc5f68510dfafccc03a7d97609a4ca5e9de71a5ecb8f8f5d66dc0d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765348913167169250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a,PodSandboxId:fabf053730f558542e3737713ea6d0699c901e78884675a78146d015b32d4703,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765348912681473254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-
9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524,PodSandboxId:cad0c151ce6d00e6cf1dd951423e1972e1e03af891d8ebc46e300fcb9b5ab924,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765348900993030245,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string
{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1,PodSandboxId:bbb8f8f9be7a468a671dfa8ff418d12e9aafb784bf51d5fddc73f68099253899,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765348901025698393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea,PodSandboxId:c50d98716e268b1f96c1b1dc76df89fde5617f72dfe1fb67ff8004a6ff545fe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765348900925163738,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8d6b115-f153-40bf-b48d-7a7600a8c8fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.781537238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39adb166-7e62-48e3-805a-dfdf2a871388 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.781612258Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39adb166-7e62-48e3-805a-dfdf2a871388 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.784302165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69eabd90-afc9-4667-a35e-58056664acd5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.785716906Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348993785626312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69eabd90-afc9-4667-a35e-58056664acd5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.788096463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b24db23-52c2-4c0d-adff-6fbf3503ccd9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.788392258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b24db23-52c2-4c0d-adff-6fbf3503ccd9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.788740741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17,PodSandboxId:17e9b8b5521c8d1107efe6b492176ab5c1bf80492021c2940cf6302233588753,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348976528555017,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2,PodSandboxId:d377cdb4422e0c5021f73e0968bfb828bd34bd83bedc28756be30f885ee7e1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348976351568750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348972699766751,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed,PodSandboxId:3ab16ec874c3236cdcb8b3a064f059091aca0047352e3bc803653b41de21003b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3
cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348968904722865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_CREATED,CreatedAt:1765348968877156419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb,PodSandboxId:c0d96aaf7ef304940cd0ca27c5e8e7a0a7f47662c0739adabc3cc639758d8afb,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348968812776829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c,
PodSandboxId:4da13908474bc994312eb8e4971103b791dd50fa81abf35498ad8e282cd72d62,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348968738509389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627,PodSandboxId:ee19c4293fc5f68510dfafccc03a7d97609a4ca5e9de71a5ecb8f8f5d66dc0d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765348913167169250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a,PodSandboxId:fabf053730f558542e3737713ea6d0699c901e78884675a78146d015b32d4703,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765348912681473254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-
9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524,PodSandboxId:cad0c151ce6d00e6cf1dd951423e1972e1e03af891d8ebc46e300fcb9b5ab924,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765348900993030245,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string
{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1,PodSandboxId:bbb8f8f9be7a468a671dfa8ff418d12e9aafb784bf51d5fddc73f68099253899,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765348901025698393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea,PodSandboxId:c50d98716e268b1f96c1b1dc76df89fde5617f72dfe1fb67ff8004a6ff545fe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765348900925163738,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b24db23-52c2-4c0d-adff-6fbf3503ccd9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.831121069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c522adff-f2c5-43b9-85f6-1bb1a96ee441 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.831359967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c522adff-f2c5-43b9-85f6-1bb1a96ee441 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.833368159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=854eca9f-bced-45fc-b95e-44ced85e103f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.833777534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348993833754581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=854eca9f-bced-45fc-b95e-44ced85e103f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.834699534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bae3940b-1f1e-4e31-b6bf-83eb4c708196 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.834759747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bae3940b-1f1e-4e31-b6bf-83eb4c708196 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.835040572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17,PodSandboxId:17e9b8b5521c8d1107efe6b492176ab5c1bf80492021c2940cf6302233588753,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348976528555017,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2,PodSandboxId:d377cdb4422e0c5021f73e0968bfb828bd34bd83bedc28756be30f885ee7e1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348976351568750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348972699766751,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed,PodSandboxId:3ab16ec874c3236cdcb8b3a064f059091aca0047352e3bc803653b41de21003b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3
cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348968904722865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_CREATED,CreatedAt:1765348968877156419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb,PodSandboxId:c0d96aaf7ef304940cd0ca27c5e8e7a0a7f47662c0739adabc3cc639758d8afb,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348968812776829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c,
PodSandboxId:4da13908474bc994312eb8e4971103b791dd50fa81abf35498ad8e282cd72d62,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348968738509389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627,PodSandboxId:ee19c4293fc5f68510dfafccc03a7d97609a4ca5e9de71a5ecb8f8f5d66dc0d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765348913167169250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a,PodSandboxId:fabf053730f558542e3737713ea6d0699c901e78884675a78146d015b32d4703,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765348912681473254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-
9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524,PodSandboxId:cad0c151ce6d00e6cf1dd951423e1972e1e03af891d8ebc46e300fcb9b5ab924,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765348900993030245,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string
{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1,PodSandboxId:bbb8f8f9be7a468a671dfa8ff418d12e9aafb784bf51d5fddc73f68099253899,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765348901025698393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea,PodSandboxId:c50d98716e268b1f96c1b1dc76df89fde5617f72dfe1fb67ff8004a6ff545fe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765348900925163738,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bae3940b-1f1e-4e31-b6bf-83eb4c708196 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.873956907Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6eb3cc9-6935-4dce-87f0-89aad06dd425 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.874050727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6eb3cc9-6935-4dce-87f0-89aad06dd425 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.875402759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f25db004-755b-42e9-bb96-475c0eb7db26 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.875763613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765348993875739725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f25db004-755b-42e9-bb96-475c0eb7db26 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.876709874Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=340f8672-52df-4177-8c41-1e88ab346f4f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.876779786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=340f8672-52df-4177-8c41-1e88ab346f4f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:43:13 pause-824458 crio[2587]: time="2025-12-10 06:43:13.877020631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17,PodSandboxId:17e9b8b5521c8d1107efe6b492176ab5c1bf80492021c2940cf6302233588753,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765348976528555017,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2,PodSandboxId:d377cdb4422e0c5021f73e0968bfb828bd34bd83bedc28756be30f885ee7e1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765348976351568750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765348972699766751,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed,PodSandboxId:3ab16ec874c3236cdcb8b3a064f059091aca0047352e3bc803653b41de21003b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3
cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765348968904722865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9,PodSandboxId:ce378d362cbe168bf5a19510594c9c9b5b3e698812fea51e4a62302e6d6abe06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_CREATED,CreatedAt:1765348968877156419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eaa4669b5a53353f9302ab988ab56ad,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb,PodSandboxId:c0d96aaf7ef304940cd0ca27c5e8e7a0a7f47662c0739adabc3cc639758d8afb,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765348968812776829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c,
PodSandboxId:4da13908474bc994312eb8e4971103b791dd50fa81abf35498ad8e282cd72d62,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765348968738509389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627,PodSandboxId:ee19c4293fc5f68510dfafccc03a7d97609a4ca5e9de71a5ecb8f8f5d66dc0d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765348913167169250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7glnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160b4454-f699-48f2-9f5a-f8f8101f611b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a,PodSandboxId:fabf053730f558542e3737713ea6d0699c901e78884675a78146d015b32d4703,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765348912681473254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzkpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a52a9b4-a54f-4b7d-
9313-9597f8681754,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524,PodSandboxId:cad0c151ce6d00e6cf1dd951423e1972e1e03af891d8ebc46e300fcb9b5ab924,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765348900993030245,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4496c154464d7b19fa4222da4dabcf8e,},Annotations:map[string]string
{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1,PodSandboxId:bbb8f8f9be7a468a671dfa8ff418d12e9aafb784bf51d5fddc73f68099253899,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765348901025698393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-824458,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c94cee1c21f8fe8705ac5670a9362cc1,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea,PodSandboxId:c50d98716e268b1f96c1b1dc76df89fde5617f72dfe1fb67ff8004a6ff545fe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765348900925163738,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-824458,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf59c3483e92a1be81db472fe81226,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=340f8672-52df-4177-8c41-1e88ab346f4f name=/runtime.v1.RuntimeService/ListContainers
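
The block above is CRI-O answering periodic CRI polls: Version, ImageFsInfo and an unfiltered ListContainers issued back to back, which is why the same container list is dumped several times with only the request ids changing. As a rough illustration only (not part of the test suite; the socket path, timeout and client wiring are assumptions), the same three calls can be issued directly against the CRI-O socket with the generated k8s.io/cri-api client:

// Sketch only: issue the same CRI calls seen in the crio debug log above
// (Version, ImageFsInfo, unfiltered ListContainers) against CRI-O's socket.
// Socket path and timeout are assumptions, not taken from the report.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O listens on a unix socket; grpc-go resolves the unix:// scheme.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
	}

	// An empty filter returns the full container list, which matches the
	// "No filters were applied, returning full container list" lines above.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State, "attempt", c.Metadata.Attempt)
	}
}

Clients such as the kubelet and crictl typically drive these same RPCs; the container status table that follows is just a condensed view of the same ListContainers payload.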
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	369fca45db110       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   17 seconds ago       Running             kube-proxy                1                   17e9b8b5521c8       kube-proxy-fzkpb                       kube-system
	6ebb49a881e3e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago       Running             coredns                   1                   d377cdb4422e0       coredns-66bc5c9577-7glnx               kube-system
	13df8d1efc84d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   21 seconds ago       Running             kube-controller-manager   2                   ce378d362cbe1       kube-controller-manager-pause-824458   kube-system
	83d3a0f909ff0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   25 seconds ago       Running             kube-apiserver            1                   3ab16ec874c32       kube-apiserver-pause-824458            kube-system
	d422e2277bff4       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   25 seconds ago       Created             kube-controller-manager   1                   ce378d362cbe1       kube-controller-manager-pause-824458   kube-system
	75ba0c875bd7e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   25 seconds ago       Running             kube-scheduler            1                   c0d96aaf7ef30       kube-scheduler-pause-824458            kube-system
	d41ca68871860       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   25 seconds ago       Running             etcd                      1                   4da13908474bc       etcd-pause-824458                      kube-system
	a0acef2ce8c58       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   ee19c4293fc5f       coredns-66bc5c9577-7glnx               kube-system
	9adafa9471f74       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   About a minute ago   Exited              kube-proxy                0                   fabf053730f55       kube-proxy-fzkpb                       kube-system
	a3e7c9f3faf92       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Exited              kube-apiserver            0                   bbb8f8f9be7a4       kube-apiserver-pause-824458            kube-system
	21faccb8890de       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Exited              etcd                      0                   cad0c151ce6d0       etcd-pause-824458                      kube-system
	ded8e61ceef81       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Exited              kube-scheduler            0                   c50d98716e268       kube-scheduler-pause-824458            kube-system
	
	
	==> coredns [6ebb49a881e3e17dff9f7a0e240b6d7d52968a812c93da2b55b8ba3d682cdbe2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45017 - 11514 "HINFO IN 7603812671637065073.6498163450613470329. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.068405721s
	
	
	==> coredns [a0acef2ce8c5800655b17314f896ccca3ad4436258a7d5dc9e093db90b15a627] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53026 - 12724 "HINFO IN 1482455045923302839.9136556925152517799. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.11371645s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
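
The exited coredns instance above spent its life waiting on the control plane: the kubernetes plugin could not list Namespaces, Services or EndpointSlices (i/o timeouts to 10.96.0.1:443) while the apiserver was restarting, so the ready plugin kept reporting "Still waiting on: kubernetes" until the pod received SIGTERM. A minimal sketch (the pod IP below is a placeholder assumption) of probing the health and ready endpoints that back the liveness-probe (8080) and readiness-probe (8181) container ports recorded in the annotations earlier:

// Sketch only: probe CoreDNS's health (:8080/health) and readiness
// (:8181/ready) endpoints. /ready returns 200 only once the kubernetes
// plugin has synced, which is what "Still waiting on: kubernetes" tracks.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println(url, "error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}

func main() {
	podIP := "10.244.0.2" // hypothetical pod IP; substitute the real one
	probe(fmt.Sprintf("http://%s:8080/health", podIP))
	probe(fmt.Sprintf("http://%s:8181/ready", podIP))
}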
	
	
	==> describe nodes <==
	Name:               pause-824458
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-824458
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=pause-824458
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_41_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:41:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-824458
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:43:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:42:56 +0000   Wed, 10 Dec 2025 06:41:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:42:56 +0000   Wed, 10 Dec 2025 06:41:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:42:56 +0000   Wed, 10 Dec 2025 06:41:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:42:56 +0000   Wed, 10 Dec 2025 06:41:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    pause-824458
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa59474e85ed4bcba91266b382e279d8
	  System UUID:                fa59474e-85ed-4bcb-a912-66b382e279d8
	  Boot ID:                    0335f1d7-610e-4c4d-a37e-08ad192c61e7
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7glnx                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     82s
	  kube-system                 etcd-pause-824458                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-824458             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-824458    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-fzkpb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-824458             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     94s (x7 over 94s)  kubelet          Node pause-824458 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    94s (x8 over 94s)  kubelet          Node pause-824458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  94s (x8 over 94s)  kubelet          Node pause-824458 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node pause-824458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node pause-824458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s                kubelet          Node pause-824458 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                kubelet          Node pause-824458 status is now: NodeReady
	  Normal  RegisteredNode           83s                node-controller  Node pause-824458 event: Registered Node pause-824458 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-824458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-824458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-824458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-824458 event: Registered Node pause-824458 in Controller
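
The node description above is the standard rendering of the pause-824458 Node object: the conditions report Ready (last heartbeat 06:42:56), and the duplicated Starting and RegisteredNode events for kubelet, kube-proxy and the node-controller reflect the pause/second-start cycle this test exercises. As a rough sketch (the kubeconfig path is an assumption), the same conditions and allocatable resources can be read with client-go:

// Sketch only: fetch the Node object the section above was rendered from
// and print its conditions and allocatable resources via client-go.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-824458", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	for name, qty := range node.Status.Allocatable {
		fmt.Println("allocatable", name, qty.String())
	}
}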
	
	
	==> dmesg <==
	[Dec10 06:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001265] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004510] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.165609] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083790] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097693] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.142647] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.958009] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.024385] kauditd_printk_skb: 189 callbacks suppressed
	[Dec10 06:42] kauditd_printk_skb: 96 callbacks suppressed
	[  +4.703325] kauditd_printk_skb: 207 callbacks suppressed
	[Dec10 06:43] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [21faccb8890def1ffd12e86a00661d4c8528145a880850f2eb5e280118b3d524] <==
	{"level":"info","ts":"2025-12-10T06:41:58.013605Z","caller":"traceutil/trace.go:172","msg":"trace[176437616] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"718.278643ms","start":"2025-12-10T06:41:57.295313Z","end":"2025-12-10T06:41:58.013592Z","steps":["trace[176437616] 'process raft request'  (duration: 718.16788ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:41:58.013887Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T06:41:57.295295Z","time spent":"718.350366ms","remote":"127.0.0.1:43990","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5265,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/pause-824458\" mod_revision:334 > success:<request_put:<key:\"/registry/minions/pause-824458\" value_size:5227 >> failure:<request_range:<key:\"/registry/minions/pause-824458\" > >"}
	{"level":"warn","ts":"2025-12-10T06:41:58.013624Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T06:41:57.461533Z","time spent":"552.087389ms","remote":"127.0.0.1:44006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":5651,"request content":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-7glnx\" limit:1 "}
	{"level":"warn","ts":"2025-12-10T06:42:05.298026Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.024395ms","expected-duration":"100ms","prefix":"","request":"header:<ID:348073525556426284 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-4ibnu5lxmlmudm57g4t63v3qpe\" mod_revision:412 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-4ibnu5lxmlmudm57g4t63v3qpe\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-4ibnu5lxmlmudm57g4t63v3qpe\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T06:42:05.298124Z","caller":"traceutil/trace.go:172","msg":"trace[1753507547] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"200.982038ms","start":"2025-12-10T06:42:05.097130Z","end":"2025-12-10T06:42:05.298112Z","steps":["trace[1753507547] 'process raft request'  (duration: 72.82145ms)","trace[1753507547] 'compare'  (duration: 127.723645ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:42:05.613324Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.871556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-7glnx\" limit:1 ","response":"range_response_count:1 size:5628"}
	{"level":"info","ts":"2025-12-10T06:42:05.613409Z","caller":"traceutil/trace.go:172","msg":"trace[2001274150] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-7glnx; range_end:; response_count:1; response_revision:422; }","duration":"150.972787ms","start":"2025-12-10T06:42:05.462421Z","end":"2025-12-10T06:42:05.613393Z","steps":["trace[2001274150] 'range keys from in-memory index tree'  (duration: 150.640289ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:42:33.052494Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T06:42:33.052572Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-824458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.53:2380"],"advertise-client-urls":["https://192.168.39.53:2379"]}
	{"level":"error","ts":"2025-12-10T06:42:33.052688Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T06:42:33.143350Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T06:42:33.143410Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:42:33.143435Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8389b8f6c4f004d4","current-leader-member-id":"8389b8f6c4f004d4"}
	{"level":"info","ts":"2025-12-10T06:42:33.143462Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-10T06:42:33.143493Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-10T06:42:33.143544Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T06:42:33.143607Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T06:42:33.143614Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T06:42:33.143670Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.53:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T06:42:33.143708Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.53:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T06:42:33.144109Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.53:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:42:33.147280Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.53:2380"}
	{"level":"error","ts":"2025-12-10T06:42:33.147355Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.53:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:42:33.147407Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.53:2380"}
	{"level":"info","ts":"2025-12-10T06:42:33.147429Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-824458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.53:2380"],"advertise-client-urls":["https://192.168.39.53:2379"]}
	
	
	==> etcd [d41ca68871860cbd06029f97c3e6b3d6d453f97cbed07847be7d4fd85243116c] <==
	{"level":"warn","ts":"2025-12-10T06:42:54.643837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.664665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.690355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.707831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.731995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.752527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.768382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.799681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.806274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.819647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.844754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.857628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.871636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.881594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.898380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.910354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.950315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.975367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.985354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:54.996450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:55.013081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:55.067938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:42:59.503933Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.221104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-12-10T06:42:59.504053Z","caller":"traceutil/trace.go:172","msg":"trace[500911789] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:495; }","duration":"119.367742ms","start":"2025-12-10T06:42:59.384672Z","end":"2025-12-10T06:42:59.504039Z","steps":["trace[500911789] 'range keys from in-memory index tree'  (duration: 119.035572ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:42:59.637521Z","caller":"traceutil/trace.go:172","msg":"trace[402092905] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"124.104016ms","start":"2025-12-10T06:42:59.513400Z","end":"2025-12-10T06:42:59.637504Z","steps":["trace[402092905] 'process raft request'  (duration: 124.006427ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:43:14 up 1 min,  0 users,  load average: 1.29, 0.53, 0.20
	Linux pause-824458 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [83d3a0f909ff063be72fdf12614e3a266b9308fa52824b238d7c86c635323fed] <==
	I1210 06:42:55.988540       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:42:55.988562       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:42:55.991607       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:42:56.000407       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 06:42:56.000487       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 06:42:56.000515       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 06:42:56.000622       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 06:42:56.000675       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:42:56.000709       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:42:56.000716       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:42:56.015591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:42:56.021105       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 06:42:56.021239       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:42:56.022709       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:42:56.032089       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:42:56.033502       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:42:56.083331       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:42:56.804531       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:42:57.817355       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:42:57.873766       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:42:57.907410       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:42:57.917855       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:42:59.509665       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:42:59.512779       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:42:59.643880       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [a3e7c9f3faf92754377f2d7012402146233da54bf988c535b125639ddd49d0e1] <==
	W1210 06:42:33.086666       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.086728       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.086783       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.086837       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.086928       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087290       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087459       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087516       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087562       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087602       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087641       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087691       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087806       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.087890       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.088699       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.089505       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090004       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090415       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090584       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090644       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090712       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090770       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090824       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090884       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 06:42:33.090944       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [13df8d1efc84d6aef8b2d764c6e3283e72dac418fccecae1a55b5cbf3e5730d6] <==
	I1210 06:42:59.347120       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 06:42:59.349324       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:42:59.351628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 06:42:59.353923       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:42:59.354005       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 06:42:59.358460       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:42:59.358506       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:42:59.358513       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:42:59.359077       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:42:59.363587       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:42:59.364965       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:42:59.374095       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:42:59.375417       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 06:42:59.376049       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 06:42:59.378937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:42:59.380278       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 06:42:59.380522       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:42:59.380801       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:42:59.380890       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:42:59.385916       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:42:59.386008       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:42:59.387398       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:42:59.387401       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:42:59.391681       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:42:59.391697       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [d422e2277bff4635c9f9a581698949beacb8877747403fc91d515859cda070c9] <==
	
	
	==> kube-proxy [369fca45db110a233a2525c67a25cef1f2e23498616fa7f34bd36606411d1b17] <==
	I1210 06:42:56.783164       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:42:56.883816       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:42:56.883914       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.53"]
	E1210 06:42:56.884153       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:42:56.946956       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 06:42:56.947057       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 06:42:56.947128       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:42:56.964514       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:42:56.965221       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:42:56.965979       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:42:56.967767       1 config.go:200] "Starting service config controller"
	I1210 06:42:56.973775       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:42:56.970974       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:42:56.975898       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:42:56.975941       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:42:56.970989       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:42:56.975964       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:42:56.972746       1 config.go:309] "Starting node config controller"
	I1210 06:42:56.975982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:42:56.975989       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:42:57.074741       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:42:57.077272       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [9adafa9471f7452aae5f06c1d5d8310941fff1b98251b5a48bf33b747728c46a] <==
	I1210 06:41:53.134551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:41:53.239466       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:41:53.239517       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.53"]
	E1210 06:41:53.239650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:41:53.330066       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 06:41:53.330157       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 06:41:53.330272       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:41:53.340936       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:41:53.341458       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:41:53.341484       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:41:53.348674       1 config.go:200] "Starting service config controller"
	I1210 06:41:53.348840       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:41:53.348935       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:41:53.350894       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:41:53.348947       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:41:53.351021       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:41:53.349459       1 config.go:309] "Starting node config controller"
	I1210 06:41:53.351121       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:41:53.351129       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:41:53.451895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:41:53.451941       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:41:53.451962       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
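
	Both kube-proxy instances report the same informational warning about nodePortAddresses being unset; it does not affect this test. For reference, a hedged sketch of how it could be set on a kubeadm-style cluster like this one (the "primary" keyword is accepted by recent kube-proxy releases, including the v1.34.2 in use here):

	  # Sketch only: add the following under the KubeProxyConfiguration document
	  # in the kube-proxy ConfigMap:
	  #   nodePortAddresses:
	  #   - primary
	  kubectl --context pause-824458 -n kube-system edit configmap kube-proxy

	  # Recreate the kube-proxy pods so they pick up the new config
	  kubectl --context pause-824458 -n kube-system rollout restart daemonset kube-proxy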
	
	
	==> kube-scheduler [75ba0c875bd7e5548f63e608387f3f09b526490e04cabf2cd1e939aff0a70fdb] <==
	I1210 06:42:55.432609       1 serving.go:386] Generated self-signed cert in-memory
	I1210 06:42:56.086849       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 06:42:56.086967       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:42:56.097383       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:42:56.097392       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 06:42:56.097414       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 06:42:56.097458       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:42:56.097460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:42:56.097465       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:42:56.097484       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:42:56.097490       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:42:56.197736       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1210 06:42:56.197917       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:42:56.198297       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [ded8e61ceef813b1679fe247bef54e58488cbeed324d110c81a0c8f03c012dea] <==
	E1210 06:41:44.112123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:41:44.112258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:41:44.110604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:41:44.112586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:41:44.112714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:41:44.933762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 06:41:44.993664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:41:44.994040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:41:45.111778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:41:45.112383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:41:45.138287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:41:45.138376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:41:45.191921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:41:45.231723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:41:45.262780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:41:45.267021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 06:41:45.353583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:41:45.379136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:41:45.394404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:41:45.411911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1210 06:41:47.495867       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:42:33.063926       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 06:42:33.067777       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 06:42:33.067817       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1210 06:42:33.067848       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 10 06:42:53 pause-824458 kubelet[3276]: E1210 06:42:53.161987    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:53 pause-824458 kubelet[3276]: E1210 06:42:53.170743    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:53 pause-824458 kubelet[3276]: E1210 06:42:53.175694    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:53 pause-824458 kubelet[3276]: E1210 06:42:53.176509    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:53 pause-824458 kubelet[3276]: I1210 06:42:53.670220    3276 kubelet_node_status.go:75] "Attempting to register node" node="pause-824458"
	Dec 10 06:42:54 pause-824458 kubelet[3276]: E1210 06:42:54.182803    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:54 pause-824458 kubelet[3276]: E1210 06:42:54.183346    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:54 pause-824458 kubelet[3276]: E1210 06:42:54.183447    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:54 pause-824458 kubelet[3276]: E1210 06:42:54.183827    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:55 pause-824458 kubelet[3276]: E1210 06:42:55.190699    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:55 pause-824458 kubelet[3276]: E1210 06:42:55.191116    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:55 pause-824458 kubelet[3276]: E1210 06:42:55.248422    3276 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-824458\" not found" node="pause-824458"
	Dec 10 06:42:55 pause-824458 kubelet[3276]: I1210 06:42:55.979438    3276 apiserver.go:52] "Watching apiserver"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.013160    3276 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.047460    3276 kubelet_node_status.go:124] "Node was previously registered" node="pause-824458"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.047565    3276 kubelet_node_status.go:78] "Successfully registered node" node="pause-824458"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.047587    3276 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.049802    3276 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.079256    3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a52a9b4-a54f-4b7d-9313-9597f8681754-lib-modules\") pod \"kube-proxy-fzkpb\" (UID: \"9a52a9b4-a54f-4b7d-9313-9597f8681754\") " pod="kube-system/kube-proxy-fzkpb"
	Dec 10 06:42:56 pause-824458 kubelet[3276]: I1210 06:42:56.080022    3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a52a9b4-a54f-4b7d-9313-9597f8681754-xtables-lock\") pod \"kube-proxy-fzkpb\" (UID: \"9a52a9b4-a54f-4b7d-9313-9597f8681754\") " pod="kube-system/kube-proxy-fzkpb"
	Dec 10 06:43:02 pause-824458 kubelet[3276]: E1210 06:43:02.164274    3276 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765348982163977590 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 10 06:43:02 pause-824458 kubelet[3276]: E1210 06:43:02.164296    3276 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765348982163977590 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 10 06:43:02 pause-824458 kubelet[3276]: I1210 06:43:02.862088    3276 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:43:12 pause-824458 kubelet[3276]: E1210 06:43:12.167336    3276 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765348992166708542 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 10 06:43:12 pause-824458 kubelet[3276]: E1210 06:43:12.167377    3276 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765348992166708542 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
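
	The repeated eviction-manager errors above come from the kubelet asking the CRI (cri-o here) for image-filesystem stats and not getting every field it expects. A quick way to see exactly what the runtime reports is to query it with crictl from inside the guest (a sketch; it assumes crictl is on the VM's PATH, which it normally is in the minikube image):

	  # Sketch only: print the image filesystem info the CRI returns to the kubelet
	  minikube -p pause-824458 ssh -- sudo crictl imagefsinfo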
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-824458 -n pause-824458
helpers_test.go:270: (dbg) Run:  kubectl --context pause-824458 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (48.00s)
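
To iterate on this failure locally, one plausible way to re-run just this subtest from a minikube checkout is the standard go test -run filter on the integration package; the harness normally takes additional flags (driver, start args), so treat this as a sketch rather than the canonical invocation:

  # Sketch only: assumes out/minikube-linux-amd64 has already been built (e.g. via make)
  go test -v -timeout 30m ./test/integration \
    -run "TestPause/serial/SecondStartNoReconfiguration"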


Test pass (375/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.44
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 9.78
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.61
18 TestDownloadOnly/v1.34.2/DeleteAll 0.16
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 10.72
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.16
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.64
31 TestOffline 107.32
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 129.78
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 10.53
44 TestAddons/parallel/Registry 18.98
45 TestAddons/parallel/RegistryCreds 0.67
47 TestAddons/parallel/InspektorGadget 11.72
48 TestAddons/parallel/MetricsServer 5.75
50 TestAddons/parallel/CSI 49.17
51 TestAddons/parallel/Headlamp 22.9
52 TestAddons/parallel/CloudSpanner 5.56
53 TestAddons/parallel/LocalPath 57.72
54 TestAddons/parallel/NvidiaDevicePlugin 7.04
55 TestAddons/parallel/Yakd 11.81
57 TestAddons/StoppedEnableDisable 72.22
58 TestCertOptions 51.67
59 TestCertExpiration 300.54
61 TestForceSystemdFlag 39.7
62 TestForceSystemdEnv 54.88
67 TestErrorSpam/setup 35.47
68 TestErrorSpam/start 0.33
69 TestErrorSpam/status 0.66
70 TestErrorSpam/pause 1.52
71 TestErrorSpam/unpause 1.8
72 TestErrorSpam/stop 87.81
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 52.84
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 35
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.37
84 TestFunctional/serial/CacheCmd/cache/add_local 2.15
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 39.46
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.31
95 TestFunctional/serial/LogsFileCmd 1.32
96 TestFunctional/serial/InvalidService 4.35
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 12.86
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.77
106 TestFunctional/parallel/ServiceCmdConnect 32.44
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 49.69
110 TestFunctional/parallel/SSHCmd 0.37
111 TestFunctional/parallel/CpCmd 1.22
112 TestFunctional/parallel/MySQL 36.53
113 TestFunctional/parallel/FileSync 0.21
114 TestFunctional/parallel/CertSync 1.22
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
122 TestFunctional/parallel/License 0.33
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
125 TestFunctional/parallel/ProfileCmd/profile_list 0.46
126 TestFunctional/parallel/MountCmd/any-port 8.94
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
128 TestFunctional/parallel/Version/short 0.07
129 TestFunctional/parallel/Version/components 0.63
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
133 TestFunctional/parallel/ServiceCmd/List 0.52
134 TestFunctional/parallel/MountCmd/specific-port 1.64
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
137 TestFunctional/parallel/ServiceCmd/Format 0.32
138 TestFunctional/parallel/ServiceCmd/URL 0.33
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
149 TestFunctional/parallel/ImageCommands/ImageListShort 0.85
150 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
151 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
152 TestFunctional/parallel/ImageCommands/ImageListYaml 0.65
153 TestFunctional/parallel/ImageCommands/ImageBuild 12.9
154 TestFunctional/parallel/ImageCommands/Setup 1.74
155 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
156 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 74.46
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 52.46
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.07
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.3
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.11
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.56
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 30.83
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.31
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.31
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.75
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.4
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 15.06
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.67
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 31.46
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 48.13
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.35
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.13
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 39.39
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.16
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.16
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.38
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.64
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.2
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.37
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.33
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.35
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 8.15
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.48
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.41
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.45
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.33
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.33
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.57
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.42
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.43
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.18
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.18
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.21
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.18
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.37
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 2.26
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.07
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.24
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.92
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.54
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.88
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 1.38
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.55
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 209.11
262 TestMultiControlPlane/serial/DeployApp 6.88
263 TestMultiControlPlane/serial/PingHostFromPods 1.29
264 TestMultiControlPlane/serial/AddWorkerNode 43.39
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.67
267 TestMultiControlPlane/serial/CopyFile 10.88
268 TestMultiControlPlane/serial/StopSecondaryNode 90.03
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
270 TestMultiControlPlane/serial/RestartSecondaryNode 44.47
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.68
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 368.89
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.32
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
275 TestMultiControlPlane/serial/StopCluster 238.07
276 TestMultiControlPlane/serial/RestartCluster 89.31
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
278 TestMultiControlPlane/serial/AddSecondaryNode 80.26
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.69
284 TestJSONOutput/start/Command 74.71
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.69
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.62
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 7.43
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.23
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 77.9
316 TestMountStart/serial/StartWithMountFirst 19.57
317 TestMountStart/serial/VerifyMountFirst 0.3
318 TestMountStart/serial/StartWithMountSecond 21.06
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.69
321 TestMountStart/serial/VerifyMountPostDelete 0.3
322 TestMountStart/serial/Stop 1.21
323 TestMountStart/serial/RestartStopped 18.57
324 TestMountStart/serial/VerifyMountPostStop 0.3
327 TestMultiNode/serial/FreshStart2Nodes 97.45
328 TestMultiNode/serial/DeployApp2Nodes 5.88
329 TestMultiNode/serial/PingHostFrom2Pods 0.84
330 TestMultiNode/serial/AddNode 42.38
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.46
333 TestMultiNode/serial/CopyFile 6.05
334 TestMultiNode/serial/StopNode 2.28
335 TestMultiNode/serial/StartAfterStop 40.98
336 TestMultiNode/serial/RestartKeepsNodes 289.28
337 TestMultiNode/serial/DeleteNode 2.55
338 TestMultiNode/serial/StopMultiNode 171.39
339 TestMultiNode/serial/RestartMultiNode 83.16
340 TestMultiNode/serial/ValidateNameConflict 41.32
347 TestScheduledStopUnix 107.15
351 TestRunningBinaryUpgrade 369.66
356 TestPause/serial/Start 74.42
358 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 80.38
367 TestNetworkPlugins/group/false 3.46
372 TestNoKubernetes/serial/StartWithStopK8s 17.92
373 TestNoKubernetes/serial/Start 19.13
374 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
375 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
376 TestNoKubernetes/serial/ProfileList 1.16
377 TestNoKubernetes/serial/Stop 1.42
378 TestNoKubernetes/serial/StartNoArgs 36.4
379 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
380 TestStoppedBinaryUpgrade/Setup 3.32
381 TestStoppedBinaryUpgrade/Upgrade 107.35
382 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
390 TestISOImage/Setup 19.66
392 TestISOImage/Binaries/crictl 0.28
393 TestISOImage/Binaries/curl 0.17
394 TestISOImage/Binaries/docker 0.16
395 TestISOImage/Binaries/git 0.15
396 TestISOImage/Binaries/iptables 0.24
397 TestISOImage/Binaries/podman 0.16
398 TestISOImage/Binaries/rsync 0.17
399 TestISOImage/Binaries/socat 0.17
400 TestISOImage/Binaries/wget 0.16
401 TestISOImage/Binaries/VBoxControl 0.16
402 TestISOImage/Binaries/VBoxService 0.16
403 TestNetworkPlugins/group/auto/Start 95.04
404 TestNetworkPlugins/group/custom-flannel/Start 70.06
405 TestNetworkPlugins/group/auto/KubeletFlags 0.17
406 TestNetworkPlugins/group/auto/NetCatPod 10.23
407 TestNetworkPlugins/group/auto/DNS 0.15
408 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
409 TestNetworkPlugins/group/auto/Localhost 0.13
410 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
411 TestNetworkPlugins/group/auto/HairPin 0.15
412 TestNetworkPlugins/group/custom-flannel/DNS 0.21
413 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
414 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
415 TestNetworkPlugins/group/kindnet/Start 91.98
416 TestNetworkPlugins/group/flannel/Start 80
417 TestNetworkPlugins/group/enable-default-cni/Start 90.72
418 TestNetworkPlugins/group/flannel/ControllerPod 6.01
419 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
420 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
421 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
422 TestNetworkPlugins/group/flannel/NetCatPod 10.26
423 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
424 TestNetworkPlugins/group/flannel/DNS 0.19
425 TestNetworkPlugins/group/kindnet/DNS 0.19
426 TestNetworkPlugins/group/flannel/Localhost 0.15
427 TestNetworkPlugins/group/kindnet/Localhost 0.15
428 TestNetworkPlugins/group/flannel/HairPin 0.16
429 TestNetworkPlugins/group/kindnet/HairPin 0.14
430 TestNetworkPlugins/group/bridge/Start 84.63
431 TestNetworkPlugins/group/calico/Start 94.42
432 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
433 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
434 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
435 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
436 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
438 TestStartStop/group/old-k8s-version/serial/FirstStart 61.88
439 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
440 TestNetworkPlugins/group/bridge/NetCatPod 11.59
441 TestNetworkPlugins/group/calico/ControllerPod 6.01
442 TestNetworkPlugins/group/bridge/DNS 0.16
443 TestNetworkPlugins/group/bridge/Localhost 0.15
444 TestNetworkPlugins/group/bridge/HairPin 0.14
445 TestStartStop/group/old-k8s-version/serial/DeployApp 11.32
446 TestNetworkPlugins/group/calico/KubeletFlags 0.19
447 TestNetworkPlugins/group/calico/NetCatPod 10.27
448 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
449 TestStartStop/group/old-k8s-version/serial/Stop 87.09
451 TestStartStop/group/no-preload/serial/FirstStart 94.85
452 TestNetworkPlugins/group/calico/DNS 0.23
453 TestNetworkPlugins/group/calico/Localhost 0.13
454 TestNetworkPlugins/group/calico/HairPin 0.2
456 TestStartStop/group/embed-certs/serial/FirstStart 87.39
457 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
458 TestStartStop/group/old-k8s-version/serial/SecondStart 43.32
459 TestStartStop/group/no-preload/serial/DeployApp 10.38
460 TestStartStop/group/embed-certs/serial/DeployApp 12.35
461 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
462 TestStartStop/group/no-preload/serial/Stop 90.19
463 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
464 TestStartStop/group/embed-certs/serial/Stop 73.98
465 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 18.01
466 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
467 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
468 TestStartStop/group/old-k8s-version/serial/Pause 2.52
470 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.02
471 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
472 TestStartStop/group/embed-certs/serial/SecondStart 46.05
473 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
474 TestStartStop/group/no-preload/serial/SecondStart 66.92
475 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
476 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.01
477 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
478 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
479 TestStartStop/group/default-k8s-diff-port/serial/Stop 86.23
480 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
481 TestStartStop/group/embed-certs/serial/Pause 2.59
483 TestStartStop/group/newest-cni/serial/FirstStart 42.49
484 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
485 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
486 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
487 TestStartStop/group/no-preload/serial/Pause 2.62
489 TestISOImage/PersistentMounts//data 0.17
490 TestISOImage/PersistentMounts//var/lib/docker 0.16
491 TestISOImage/PersistentMounts//var/lib/cni 0.16
492 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
493 TestISOImage/PersistentMounts//var/lib/minikube 0.16
494 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
495 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
496 TestISOImage/VersionJSON 0.18
497 TestISOImage/eBPFSupport 0.17
498 TestStartStop/group/newest-cni/serial/DeployApp 0
499 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.93
500 TestStartStop/group/newest-cni/serial/Stop 7.05
501 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
502 TestStartStop/group/newest-cni/serial/SecondStart 31.51
503 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
504 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 42.96
505 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
506 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
507 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
508 TestStartStop/group/newest-cni/serial/Pause 2.35
509 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
510 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
511 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
512 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.4
x
+
TestDownloadOnly/v1.28.0/json-events (22.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-996972 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-996972 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.434877284s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.44s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 05:44:09.886348   12588 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1210 05:44:09.886466   12588 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
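For context, "preload exists" in this report simply means the preloaded-images tarball for the requested Kubernetes version is already sitting in the local minikube cache at the path logged above. A minimal Go sketch of that kind of file-existence check, assuming the cache layout visible in the log (illustrative only, not minikube's actual preload helper):

    // Sketch: is the cached preload tarball present on disk?
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Path pieces copied from the log above; adjust minikubeHome for a local run.
        minikubeHome := "/home/jenkins/minikube-integration/22089-8667/.minikube"
        tarball := "preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
        p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", tarball)

        if _, err := os.Stat(p); err != nil {
            fmt.Println("preload missing:", err) // a fresh run would download it instead
            return
        }
        fmt.Println("found local preload:", p)
    }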

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-996972
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-996972: exit status 85 (74.685808ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-996972 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-996972 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:43:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:43:47.504994   12600 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:43:47.505129   12600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:47.505140   12600 out.go:374] Setting ErrFile to fd 2...
	I1210 05:43:47.505146   12600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:47.505349   12600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	W1210 05:43:47.505874   12600 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22089-8667/.minikube/config/config.json: open /home/jenkins/minikube-integration/22089-8667/.minikube/config/config.json: no such file or directory
	I1210 05:43:47.506791   12600 out.go:368] Setting JSON to true
	I1210 05:43:47.507661   12600 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1572,"bootTime":1765343856,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:43:47.507777   12600 start.go:143] virtualization: kvm guest
	I1210 05:43:47.512455   12600 out.go:99] [download-only-996972] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1210 05:43:47.512594   12600 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 05:43:47.512639   12600 notify.go:221] Checking for updates...
	I1210 05:43:47.513979   12600 out.go:171] MINIKUBE_LOCATION=22089
	I1210 05:43:47.515126   12600 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:43:47.516323   12600 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 05:43:47.517609   12600 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:43:47.518985   12600 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:43:47.524805   12600 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:43:47.525059   12600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:43:48.030131   12600 out.go:99] Using the kvm2 driver based on user configuration
	I1210 05:43:48.030197   12600 start.go:309] selected driver: kvm2
	I1210 05:43:48.030207   12600 start.go:927] validating driver "kvm2" against <nil>
	I1210 05:43:48.030583   12600 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:43:48.031092   12600 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1210 05:43:48.031264   12600 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:43:48.031290   12600 cni.go:84] Creating CNI manager for ""
	I1210 05:43:48.031390   12600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:43:48.031405   12600 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:43:48.031463   12600 start.go:353] cluster config:
	{Name:download-only-996972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-996972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:43:48.031686   12600 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:43:48.033273   12600 out.go:99] Downloading VM boot image ...
	I1210 05:43:48.033321   12600 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22089-8667/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 05:43:58.180196   12600 out.go:99] Starting "download-only-996972" primary control-plane node in "download-only-996972" cluster
	I1210 05:43:58.180239   12600 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 05:43:58.273901   12600 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 05:43:58.273956   12600 cache.go:65] Caching tarball of preloaded images
	I1210 05:43:58.274159   12600 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 05:43:58.276049   12600 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 05:43:58.276081   12600 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1210 05:43:58.375829   12600 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1210 05:43:58.375969   12600 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-996972 host does not exist
	  To start a cluster, run: "minikube start -p download-only-996972"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
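The exit status 85 above does not fail the test: the profile was created with --download-only, so no host exists and "minikube logs" has nothing to show, which the test appears to treat as acceptable. A hedged Go sketch of that general pattern, tolerating one specific non-zero exit code from a CLI (binary path and profile name are taken from this report; the code is illustrative, not the actual test helper):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-996972")
        out, err := cmd.CombinedOutput()

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("logs unexpectedly succeeded for a download-only profile")
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 85:
            fmt.Printf("expected exit 85; captured %d bytes of output\n", len(out))
        default:
            fmt.Println("unexpected failure:", err)
        }
    }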

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-996972
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (9.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-444246 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-444246 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.774850441s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.78s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1210 05:44:20.042665   12588 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1210 05:44:20.042705   12588 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-444246
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-444246: exit status 85 (612.774117ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-996972 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-996972 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │ 10 Dec 25 05:44 UTC │
	│ delete  │ -p download-only-996972                                                                                                                                                 │ download-only-996972 │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │ 10 Dec 25 05:44 UTC │
	│ start   │ -o=json --download-only -p download-only-444246 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-444246 │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:44:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:44:10.319282   12863 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:44:10.319525   12863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:44:10.319533   12863 out.go:374] Setting ErrFile to fd 2...
	I1210 05:44:10.319537   12863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:44:10.319722   12863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 05:44:10.320161   12863 out.go:368] Setting JSON to true
	I1210 05:44:10.320982   12863 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1594,"bootTime":1765343856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:44:10.321034   12863 start.go:143] virtualization: kvm guest
	I1210 05:44:10.323084   12863 out.go:99] [download-only-444246] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:44:10.323253   12863 notify.go:221] Checking for updates...
	I1210 05:44:10.324442   12863 out.go:171] MINIKUBE_LOCATION=22089
	I1210 05:44:10.325818   12863 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:44:10.327347   12863 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 05:44:10.328666   12863 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:44:10.329873   12863 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:44:10.332254   12863 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:44:10.332494   12863 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:44:10.364540   12863 out.go:99] Using the kvm2 driver based on user configuration
	I1210 05:44:10.364583   12863 start.go:309] selected driver: kvm2
	I1210 05:44:10.364589   12863 start.go:927] validating driver "kvm2" against <nil>
	I1210 05:44:10.364897   12863 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:44:10.365420   12863 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1210 05:44:10.365584   12863 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:44:10.365611   12863 cni.go:84] Creating CNI manager for ""
	I1210 05:44:10.365654   12863 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:44:10.365661   12863 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:44:10.365700   12863 start.go:353] cluster config:
	{Name:download-only-444246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-444246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:44:10.365823   12863 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:44:10.367780   12863 out.go:99] Starting "download-only-444246" primary control-plane node in "download-only-444246" cluster
	I1210 05:44:10.367813   12863 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:44:10.817875   12863 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 05:44:10.817912   12863 cache.go:65] Caching tarball of preloaded images
	I1210 05:44:10.818062   12863 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:44:10.820413   12863 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1210 05:44:10.820435   12863 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1210 05:44:10.915186   12863 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1210 05:44:10.915228   12863 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-444246 host does not exist
	  To start a cluster, run: "minikube start -p download-only-444246"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.61s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-444246
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (10.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-841800 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-841800 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.724005819s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (10.72s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1210 05:44:31.689277   12588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1210 05:44:31.689326   12588 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-841800
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-841800: exit status 85 (73.386664ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-996972 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-996972 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │ 10 Dec 25 05:44 UTC │
	│ delete  │ -p download-only-996972                                                                                                                                                        │ download-only-996972 │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │ 10 Dec 25 05:44 UTC │
	│ start   │ -o=json --download-only -p download-only-444246 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-444246 │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │ 10 Dec 25 05:44 UTC │
	│ delete  │ -p download-only-444246                                                                                                                                                        │ download-only-444246 │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │ 10 Dec 25 05:44 UTC │
	│ start   │ -o=json --download-only -p download-only-841800 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-841800 │ jenkins │ v1.37.0 │ 10 Dec 25 05:44 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:44:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:44:21.016008   13059 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:44:21.016241   13059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:44:21.016248   13059 out.go:374] Setting ErrFile to fd 2...
	I1210 05:44:21.016252   13059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:44:21.016437   13059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 05:44:21.016879   13059 out.go:368] Setting JSON to true
	I1210 05:44:21.017621   13059 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1605,"bootTime":1765343856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:44:21.017675   13059 start.go:143] virtualization: kvm guest
	I1210 05:44:21.020045   13059 out.go:99] [download-only-841800] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:44:21.020184   13059 notify.go:221] Checking for updates...
	I1210 05:44:21.022161   13059 out.go:171] MINIKUBE_LOCATION=22089
	I1210 05:44:21.023628   13059 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:44:21.028688   13059 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 05:44:21.030315   13059 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:44:21.031792   13059 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:44:21.034413   13059 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:44:21.034705   13059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:44:21.065072   13059 out.go:99] Using the kvm2 driver based on user configuration
	I1210 05:44:21.065100   13059 start.go:309] selected driver: kvm2
	I1210 05:44:21.065105   13059 start.go:927] validating driver "kvm2" against <nil>
	I1210 05:44:21.065407   13059 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:44:21.065906   13059 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1210 05:44:21.066047   13059 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:44:21.066068   13059 cni.go:84] Creating CNI manager for ""
	I1210 05:44:21.066115   13059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:44:21.066124   13059 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:44:21.066159   13059 start.go:353] cluster config:
	{Name:download-only-841800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-841800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:44:21.066249   13059 iso.go:125] acquiring lock: {Name:mk873366a783b9f735599145b1cff21faf50318e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:44:21.067696   13059 out.go:99] Starting "download-only-841800" primary control-plane node in "download-only-841800" cluster
	I1210 05:44:21.067714   13059 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 05:44:21.511964   13059 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 05:44:21.512017   13059 cache.go:65] Caching tarball of preloaded images
	I1210 05:44:21.512207   13059 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 05:44:21.514230   13059 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1210 05:44:21.514251   13059 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1210 05:44:21.613656   13059 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1210 05:44:21.613702   13059 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22089-8667/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 05:44:29.911434   13059 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 05:44:29.911785   13059 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/download-only-841800/config.json ...
	I1210 05:44:29.911811   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/download-only-841800/config.json: {Name:mk7df93b2ae78084f4223f32bfc9de9b17257bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:29.912000   13059 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 05:44:29.912199   13059 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22089-8667/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-841800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-841800"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)
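The download lines in this block use the "?checksum=md5:<hash>" URL convention: the MD5 obtained from the GCS API is appended to the preload URL, apparently so the downloader can verify the tarball after fetching it. A minimal Go sketch of that verify-after-download idea, assuming a plain MD5 comparison (URL and hash are copied from the log; this is not minikube's actual download code, and the tarball is large):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadWithMD5 streams url into dest while hashing, then compares digests.
    func downloadWithMD5(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := md5.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        err := downloadWithMD5(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4",
            "/tmp/preload.tar.lz4",
            "b4861df7675d96066744278d08e2cd35",
        )
        fmt.Println(err)
    }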

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-841800
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1210 05:44:32.490624   12588 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-952457 --alsologtostderr --binary-mirror http://127.0.0.1:42765 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-952457" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-952457
--- PASS: TestBinaryMirror (0.64s)
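For context, the test above serves Kubernetes binaries from a local HTTP endpoint (127.0.0.1:42765) and hands it to minikube via --binary-mirror. Below is a minimal sketch of the same idea, assuming minikube is on PATH and that a local ./mirror directory mirrors the dl.k8s.io release layout; the directory and profile names here are hypothetical, not taken from this log.

	// binary_mirror_sketch.go - illustrative only; mirrors the flags shown in
	// the TestBinaryMirror invocation above.
	package main

	import (
		"log"
		"net"
		"net/http"
		"os"
		"os/exec"
	)

	func main() {
		// Serve the local mirror directory on an ephemeral loopback port,
		// similar to the test's endpoint at http://127.0.0.1:<port>.
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		go http.Serve(ln, http.FileServer(http.Dir("./mirror")))

		// Point minikube's binary downloads at the mirror instead of dl.k8s.io.
		cmd := exec.Command("minikube", "start", "--download-only",
			"-p", "binary-mirror-demo",
			"--binary-mirror", "http://"+ln.Addr().String(),
			"--driver=kvm2", "--container-runtime=crio")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}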

                                                
                                    
x
+
TestOffline (107.32s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-832745 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-832745 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m46.245093798s)
helpers_test.go:176: Cleaning up "offline-crio-832745" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-832745
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-832745: (1.070613608s)
--- PASS: TestOffline (107.32s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-873698
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-873698: exit status 85 (66.873685ms)

                                                
                                                
-- stdout --
	* Profile "addons-873698" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-873698"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-873698
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-873698: exit status 85 (67.168082ms)

                                                
                                                
-- stdout --
	* Profile "addons-873698" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-873698"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (129.78s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-873698 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-873698 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.77916185s)
--- PASS: TestAddons/Setup (129.78s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-873698 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-873698 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-873698 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-873698 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [72587d64-2d5b-41de-bf62-e638cb2f27ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [72587d64-2d5b-41de-bf62-e638cb2f27ce] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004570351s
addons_test.go:696: (dbg) Run:  kubectl --context addons-873698 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-873698 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-873698 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.53s)
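The gcp-auth check above boils down to asserting that the webhook-injected environment variables are present in the workload pod. A minimal sketch of that assertion, assuming the busybox pod from testdata/busybox.yaml is still Running in the default namespace of the addons-873698 cluster:

	// gcp_auth_env_sketch.go - illustrative only; repeats the printenv checks
	// shown in the log above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		for _, key := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
			// printenv inside the pod must succeed and return a non-empty
			// value for each variable injected by the gcp-auth webhook.
			out, err := exec.Command("kubectl", "--context", "addons-873698",
				"exec", "busybox", "--", "printenv", key).Output()
			if err != nil || strings.TrimSpace(string(out)) == "" {
				log.Fatalf("%s not injected: %v", key, err)
			}
			fmt.Printf("%s=%s", key, out)
		}
	}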

                                                
                                    
x
+
TestAddons/parallel/Registry (18.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.195882ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-j46wr" [e47de5e6-f940-443e-ae45-290cf2aa6613] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005520513s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-dpzx2" [8e5876c7-e3be-42b2-a3bb-a526b0413ef8] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004477159s
addons_test.go:394: (dbg) Run:  kubectl --context addons-873698 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-873698 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-873698 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.015068682s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 ip
2025/12/10 05:47:20 [DEBUG] GET http://192.168.39.151:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.98s)
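The registry check above has two halves: reach the Service by its cluster DNS name from inside the cluster, then reach the registry on the node IP at port 5000 from the host (the GET to 192.168.39.151:5000 above). A minimal sketch of both probes, assuming the addons-873698 profile is still running; the probe pod name is hypothetical.

	// registry_probe_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"log"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// In-cluster check: the Service name resolves only inside the cluster,
		// so run wget from a short-lived busybox pod.
		probe := exec.Command("kubectl", "--context", "addons-873698",
			"run", "--rm", "-i", "--restart=Never", "registry-probe",
			"--image=gcr.io/k8s-minikube/busybox", "--",
			"wget", "--spider", "-S", "http://registry.kube-system.svc.cluster.local")
		if out, err := probe.CombinedOutput(); err != nil {
			log.Fatalf("in-cluster probe failed: %v\n%s", err, out)
		}

		// Host-side check: the registry also answers on the node IP at port
		// 5000, as the GET in the log above shows.
		ipOut, err := exec.Command("minikube", "-p", "addons-873698", "ip").Output()
		if err != nil {
			log.Fatal(err)
		}
		resp, err := http.Get("http://" + strings.TrimSpace(string(ipOut)) + ":5000")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println("registry answered with HTTP", resp.Status)
	}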

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.803825ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-873698
addons_test.go:334: (dbg) Run:  kubectl --context addons-873698 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.72s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-zwlbq" [e1131dbb-6940-41ec-a49c-9c9e4dfa2b5b] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004969586s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-873698 addons disable inspektor-gadget --alsologtostderr -v=1: (5.717047163s)
--- PASS: TestAddons/parallel/InspektorGadget (11.72s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.75s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 5.58251ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-9jqpv" [f48fc950-6d12-4a7a-adca-ccf9ada338da] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003794801s
addons_test.go:465: (dbg) Run:  kubectl --context addons-873698 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

                                                
                                    
x
+
TestAddons/parallel/CSI (49.17s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1210 05:47:09.869883   12588 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 05:47:09.898461   12588 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 05:47:09.898487   12588 kapi.go:107] duration metric: took 28.615676ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 28.627383ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-873698 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-873698 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [10129cbb-d8e8-41d8-9e89-dfd2b3619301] Pending
helpers_test.go:353: "task-pv-pod" [10129cbb-d8e8-41d8-9e89-dfd2b3619301] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [10129cbb-d8e8-41d8-9e89-dfd2b3619301] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.004669336s
addons_test.go:574: (dbg) Run:  kubectl --context addons-873698 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-873698 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-873698 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-873698 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-873698 delete pod task-pv-pod: (1.493547622s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-873698 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-873698 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-873698 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [c6131ec3-baad-45d3-8af3-28eba2edfb8e] Pending
helpers_test.go:353: "task-pv-pod-restore" [c6131ec3-baad-45d3-8af3-28eba2edfb8e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [c6131ec3-baad-45d3-8af3-28eba2edfb8e] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003390608s
addons_test.go:616: (dbg) Run:  kubectl --context addons-873698 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-873698 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-873698 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-873698 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.862485258s)
--- PASS: TestAddons/parallel/CSI (49.17s)
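The restore step is the interesting part of the CSI flow above: hpvc-restore is a new PersistentVolumeClaim whose dataSource points at the VolumeSnapshot taken from the original volume. A minimal sketch of those two objects, applied the same way the test applies its testdata; the class names csi-hostpath-sc and csi-hostpath-snapclass are assumptions about the csi-hostpath-driver addon, not taken from this log.

	// csi_snapshot_restore_sketch.go - illustrative only.
	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	const manifests = `
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass  # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc                # the PVC created earlier
	---
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc                # assumed class name
	  dataSource:                                      # restore: new volume is
	    name: new-snapshot-demo                        # seeded from the snapshot
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	`

	func main() {
		cmd := exec.Command("kubectl", "--context", "addons-873698", "apply", "-f", "-")
		cmd.Stdin = strings.NewReader(manifests)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("apply failed: %v\n%s", err, out)
		}
		log.Print("snapshot and restore PVC submitted")
	}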

                                                
                                    
x
+
TestAddons/parallel/Headlamp (22.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-873698 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-873698 --alsologtostderr -v=1: (1.091739685s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-rc7jj" [b5f98e05-ddb2-4587-9ac8-12a10c7ebb56] Pending
helpers_test.go:353: "headlamp-dfcdc64b-rc7jj" [b5f98e05-ddb2-4587-9ac8-12a10c7ebb56] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-rc7jj" [b5f98e05-ddb2-4587-9ac8-12a10c7ebb56] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.004124452s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-873698 addons disable headlamp --alsologtostderr -v=1: (5.801501636s)
--- PASS: TestAddons/parallel/Headlamp (22.90s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-g9sqh" [8a9a5394-1ed0-4569-b2a7-b091ccba2d25] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004982814s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57.72s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-873698 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-873698 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-873698 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [48e3a626-39c5-4c14-8f41-ebf98a6ad4ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [48e3a626-39c5-4c14-8f41-ebf98a6ad4ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [48e3a626-39c5-4c14-8f41-ebf98a6ad4ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.003741861s
addons_test.go:969: (dbg) Run:  kubectl --context addons-873698 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 ssh "cat /opt/local-path-provisioner/pvc-c7207705-a0ff-4ab8-a446-2828f4377906_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-873698 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-873698 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-873698 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.921004522s)
--- PASS: TestAddons/parallel/LocalPath (57.72s)
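The local-path check above reads the provisioned file straight off the node: the provisioner backs each claim with a host directory named <pv-name>_<namespace>_<pvc-name> under /opt/local-path-provisioner, which is what the ssh "cat" above points at. A minimal sketch of the same verification, assuming test-pvc is still bound in the default namespace:

	// local_path_check_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Resolve the PV name backing the claim; dynamically provisioned PVs
		// are named pvc-<uid>, matching the path seen in the log above.
		pv, err := exec.Command("kubectl", "--context", "addons-873698",
			"get", "pvc", "test-pvc", "-n", "default",
			"-o", "jsonpath={.spec.volumeName}").Output()
		if err != nil {
			log.Fatal(err)
		}
		dir := fmt.Sprintf("/opt/local-path-provisioner/%s_default_test-pvc", strings.TrimSpace(string(pv)))

		// Read the file the workload wrote, straight off the node's filesystem.
		out, err := exec.Command("minikube", "-p", "addons-873698", "ssh",
			"cat "+dir+"/file1").CombinedOutput()
		if err != nil {
			log.Fatalf("read failed: %v\n%s", err, out)
		}
		fmt.Printf("file1 contents: %s", out)
	}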

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (7.04s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-8lp5b" [c33df452-a40c-4ce5-933c-cb75f4e74e60] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006938465s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-873698 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.036291603s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.04s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-2kp4h" [4e256e77-5a39-4c94-b92e-30ba80e006d1] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003552128s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-873698 addons disable yakd --alsologtostderr -v=1: (5.806291292s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (72.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-873698
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-873698: (1m12.016064901s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-873698
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-873698
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-873698
--- PASS: TestAddons/StoppedEnableDisable (72.22s)

                                                
                                    
x
+
TestCertOptions (51.67s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-802205 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-802205 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.334597888s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-802205 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-802205 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-802205 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-802205" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-802205
--- PASS: TestCertOptions (51.67s)
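What the openssl step above verifies: every --apiserver-ips and --apiserver-names value from the start flags must show up in the apiserver certificate's Subject Alternative Name block (and the cluster must listen on the requested --apiserver-port). A minimal sketch of the SAN check, reusing the profile name and flag values from the test invocation above:

	// cert_options_check_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Dump the apiserver certificate from inside the VM, as the test does.
		out, err := exec.Command("minikube", "-p", "cert-options-802205", "ssh",
			"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
		if err != nil {
			log.Fatal(err)
		}
		// The SAN block should list the extra IPs and names passed at start time.
		for _, want := range []string{"IP Address:192.168.15.15", "DNS:www.google.com"} {
			if !strings.Contains(string(out), want) {
				log.Fatalf("certificate is missing SAN %q", want)
			}
		}
		fmt.Println("apiserver certificate contains the requested SANs")
	}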

                                                
                                    
x
+
TestCertExpiration (300.54s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-096353 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1210 06:41:27.265621   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:41:44.186711   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-096353 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m29.760944867s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-096353 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-096353 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (29.646071355s)
helpers_test.go:176: Cleaning up "cert-expiration-096353" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-096353
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-096353: (1.13388337s)
--- PASS: TestCertExpiration (300.54s)
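The expiration test relies on the second "minikube start" renewing the certificates that the first start issued with a 3-minute lifetime. A minimal sketch of that round trip, assuming minikube is on PATH; the profile name here is hypothetical, and the final openssl call just prints the renewed notAfter date.

	// cert_expiration_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func run(args ...string) string {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		const profile = "cert-expiration-demo" // hypothetical profile name
		run("minikube", "start", "-p", profile, "--memory=3072",
			"--cert-expiration=3m", "--driver=kvm2", "--container-runtime=crio")

		// A second start with a longer lifetime regenerates certificates that
		// have expired (or are about to).
		run("minikube", "start", "-p", profile, "--memory=3072",
			"--cert-expiration=8760h", "--driver=kvm2", "--container-runtime=crio")

		fmt.Print(run("minikube", "-p", profile, "ssh",
			"openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"))
	}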

                                                
                                    
x
+
TestForceSystemdFlag (39.7s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-734845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1210 06:45:44.979173   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-734845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (38.315700871s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-734845 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-734845" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-734845
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-734845: (1.209222191s)
--- PASS: TestForceSystemdFlag (39.70s)
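The systemd-flag test reads CRI-O's drop-in config to confirm the cgroup manager selection. A minimal sketch of that check follows; the exact `cgroup_manager = "systemd"` line is an assumption about what --force-systemd writes into /etc/crio/crio.conf.d/02-crio.conf, not quoted from this log.

	// force_systemd_check_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "force-systemd-flag-734845", "ssh",
			"cat /etc/crio/crio.conf.d/02-crio.conf").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Assumed setting name; CRI-O selects its cgroup manager here.
		if !strings.Contains(string(out), `cgroup_manager = "systemd"`) {
			log.Fatal("CRI-O is not configured for the systemd cgroup manager")
		}
		fmt.Println("CRI-O drop-in selects the systemd cgroup manager")
	}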

                                                
                                    
x
+
TestForceSystemdEnv (54.88s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-632752 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-632752 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.962345813s)
helpers_test.go:176: Cleaning up "force-systemd-env-632752" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-632752
--- PASS: TestForceSystemdEnv (54.88s)

                                                
                                    
x
+
TestErrorSpam/setup (35.47s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-727817 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-727817 --driver=kvm2  --container-runtime=crio
E1210 05:51:44.194604   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:44.201044   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:44.212420   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:44.233883   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:44.275387   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:44.356907   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:44.518473   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:44.840155   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:45.482305   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:46.763939   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:49.325291   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-727817 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-727817 --driver=kvm2  --container-runtime=crio: (35.468354737s)
--- PASS: TestErrorSpam/setup (35.47s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 status
--- PASS: TestErrorSpam/status (0.66s)

                                                
                                    
x
+
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
x
+
TestErrorSpam/stop (87.81s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 stop
E1210 05:51:54.446629   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:52:04.688461   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:52:25.170258   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:06.131755   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 stop: (1m25.041654049s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-727817 --log_dir /tmp/nospam-727817 stop: (1.823202151s)
--- PASS: TestErrorSpam/stop (87.81s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/test/nested/copy/12588/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.84s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736676 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-736676 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.835306144s)
--- PASS: TestFunctional/serial/StartWithProxy (52.84s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1210 05:54:15.375291   12588 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736676 --alsologtostderr -v=8
E1210 05:54:28.056413   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-736676 --alsologtostderr -v=8: (35.001131254s)
functional_test.go:678: soft start took 35.001799551s for "functional-736676" cluster.
I1210 05:54:50.376770   12588 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (35.00s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-736676 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-736676 cache add registry.k8s.io/pause:3.1: (1.158566098s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-736676 cache add registry.k8s.io/pause:3.3: (1.128510034s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-736676 cache add registry.k8s.io/pause:latest: (1.087304434s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-736676 /tmp/TestFunctionalserialCacheCmdcacheadd_local271262752/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 cache add minikube-local-cache-test:functional-736676
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-736676 cache add minikube-local-cache-test:functional-736676: (1.768120337s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 cache delete minikube-local-cache-test:functional-736676
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-736676
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (174.595245ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
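The cache_reload sequence above is: delete the image from the node's container storage, confirm crictl no longer sees it, run "minikube cache reload", and confirm it is back. A minimal sketch of the same round trip, assuming registry.k8s.io/pause:latest was previously added with "minikube cache add":

	// cache_reload_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		const profile = "functional-736676"
		// Remove the image from the node's container storage (ignore the error
		// if it is already gone).
		exec.Command("minikube", "-p", profile, "ssh",
			"sudo crictl rmi registry.k8s.io/pause:latest").Run()

		// Push everything in the local cache back onto the node.
		if out, err := exec.Command("minikube", "-p", profile, "cache", "reload").CombinedOutput(); err != nil {
			log.Fatalf("cache reload failed: %v\n%s", err, out)
		}

		// The image should be visible to CRI-O again.
		if err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo crictl inspecti registry.k8s.io/pause:latest").Run(); err != nil {
			log.Fatal("image still missing after cache reload")
		}
		fmt.Println("cached image restored onto the node")
	}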

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 kubectl -- --context functional-736676 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-736676 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (39.46s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736676 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-736676 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.457107084s)
functional_test.go:776: restart took 39.457232819s for "functional-736676" cluster.
I1210 05:55:37.725892   12588 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (39.46s)
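The --extra-config value above maps onto a kube-apiserver command-line flag (apiserver.enable-admission-plugins becomes --enable-admission-plugins on the static pod). A minimal sketch of checking that it took effect; the component=kube-apiserver label selector is an assumption about the kubeadm static-pod labels, not something this log shows.

	// extra_config_check_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Inspect the apiserver static pod's command line via the API.
		out, err := exec.Command("kubectl", "--context", "functional-736676",
			"-n", "kube-system", "get", "pods", "-l", "component=kube-apiserver",
			"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
		if err != nil {
			log.Fatal(err)
		}
		// The requested admission plugin should appear somewhere in the
		// --enable-admission-plugins value.
		if !strings.Contains(string(out), "NamespaceAutoProvision") {
			log.Fatal("NamespaceAutoProvision not found on the apiserver command line")
		}
		fmt.Println("kube-apiserver is running with NamespaceAutoProvision enabled")
	}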

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-736676 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-736676 logs: (1.313611119s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 logs --file /tmp/TestFunctionalserialLogsFileCmd3862930834/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-736676 logs --file /tmp/TestFunctionalserialLogsFileCmd3862930834/001/logs.txt: (1.316322592s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.35s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-736676 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-736676
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-736676: exit status 115 (233.195469ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.211:30510 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-736676 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)
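The pass condition here is the exit code rather than the printed table: `minikube service` still reports the NodePort URL it computed, but exits 115 (SVC_UNREACHABLE) because invalid-svc has no running pods behind it. A minimal Go sketch of the same check, reusing the binary path and profile name from this run (the sketch itself is not part of the suite):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the step at functional_test.go:2340: ask minikube for the
	// service URL and treat exit status 115 as "service has no endpoints".
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-736676")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		fmt.Println("got the expected SVC_UNREACHABLE exit code (115)")
		return
	}
	fmt.Println("unexpected result:", err)
}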

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 config get cpus: exit status 14 (76.474865ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 config get cpus: exit status 14 (65.799992ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
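The config round-trip above leans on one convention: `minikube config get` exits with status 14 when the key is not set, and with 0 once `config set` has stored a value. A small Go sketch of that unset/get/set/get cycle, assuming the same binary and profile as this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the minikube binary used in this report against the
// functional-736676 profile and returns trimmed output plus the exit code.
func run(args ...string) (string, int) {
	full := append([]string{"-p", "functional-736676"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	run("config", "unset", "cpus")
	if _, code := run("config", "get", "cpus"); code == 14 {
		// Exit status 14 is what this log shows for a key that is not set.
		fmt.Println("cpus is unset, as expected")
	}
	run("config", "set", "cpus", "2")
	if val, _ := run("config", "get", "cpus"); val == "2" {
		fmt.Println("cpus round-tripped through config set/get")
	}
	run("config", "unset", "cpus")
}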

TestFunctional/parallel/DashboardCmd (12.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-736676 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-736676 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 18437: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.86s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736676 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-736676 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (117.950956ms)

-- stdout --
	* [functional-736676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1210 05:55:46.509122   18296 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:55:46.509223   18296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:55:46.509233   18296 out.go:374] Setting ErrFile to fd 2...
	I1210 05:55:46.509237   18296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:55:46.509511   18296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 05:55:46.509938   18296 out.go:368] Setting JSON to false
	I1210 05:55:46.510778   18296 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2291,"bootTime":1765343856,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:55:46.510859   18296 start.go:143] virtualization: kvm guest
	I1210 05:55:46.512891   18296 out.go:179] * [functional-736676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:55:46.514396   18296 notify.go:221] Checking for updates...
	I1210 05:55:46.514659   18296 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:55:46.516181   18296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:55:46.517796   18296 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 05:55:46.519092   18296 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:55:46.520279   18296 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:55:46.521611   18296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:55:46.523296   18296 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:55:46.523955   18296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:55:46.558918   18296 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 05:55:46.560432   18296 start.go:309] selected driver: kvm2
	I1210 05:55:46.560447   18296 start.go:927] validating driver "kvm2" against &{Name:functional-736676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-736676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:55:46.560584   18296 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:55:46.563122   18296 out.go:203] 
	W1210 05:55:46.564550   18296 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:55:46.565780   18296 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736676 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)
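`--dry-run` validates the request against the existing profile without touching the VM, so asking for 250MB fails fast with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 (the usable minimum reported here is 1800MB). A hedged Go sketch of that negative check, using the invocation shown at functional_test.go:989:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-736676", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 23:
		fmt.Println("rejected as expected: RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23")
	case strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY"):
		fmt.Println("rejected, but with an unexpected exit code:", err)
	default:
		fmt.Println("dry run did not fail as expected:", err)
	}
}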

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736676 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-736676 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (121.919775ms)

-- stdout --
	* [functional-736676] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1210 05:55:46.061968   18228 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:55:46.062093   18228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:55:46.062102   18228 out.go:374] Setting ErrFile to fd 2...
	I1210 05:55:46.062107   18228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:55:46.062447   18228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 05:55:46.062866   18228 out.go:368] Setting JSON to false
	I1210 05:55:46.063746   18228 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2290,"bootTime":1765343856,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:55:46.063810   18228 start.go:143] virtualization: kvm guest
	I1210 05:55:46.066267   18228 out.go:179] * [functional-736676] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 05:55:46.069559   18228 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:55:46.069581   18228 notify.go:221] Checking for updates...
	I1210 05:55:46.072014   18228 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:55:46.073204   18228 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 05:55:46.074340   18228 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:55:46.075451   18228 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:55:46.076697   18228 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:55:46.078705   18228 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:55:46.079292   18228 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:55:46.112890   18228 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1210 05:55:46.113979   18228 start.go:309] selected driver: kvm2
	I1210 05:55:46.113994   18228 start.go:927] validating driver "kvm2" against &{Name:functional-736676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-736676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:55:46.114108   18228 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:55:46.117472   18228 out.go:203] 
	W1210 05:55:46.118806   18228 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 05:55:46.119947   18228 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
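The French output above comes from the same dry-run invocation as the DryRun test; this log does not show how the locale was selected. The sketch below assumes a French locale variable (LC_ALL=fr) is enough to trigger the localized messages, which may differ from what the test actually sets:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-736676", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
	// Assumption: a French locale in the environment is what produces the
	// localized output; the exact variable the test sets is not shown here.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")

	out, _ := cmd.CombinedOutput()
	if strings.Contains(string(out), "Utilisation du pilote kvm2") {
		fmt.Println("output was localized to French")
	} else {
		fmt.Println("output was not localized; check the locale assumption")
	}
}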

TestFunctional/parallel/StatusCmd (0.77s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.77s)

TestFunctional/parallel/ServiceCmdConnect (32.44s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-736676 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-736676 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-xslh6" [8968767b-d762-4645-8128-f1d97cebe8e1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-xslh6" [8968767b-d762-4645-8128-f1d97cebe8e1] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 32.004700832s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.211:31358
functional_test.go:1680: http://192.168.39.211:31358: success! body:
Request served by hello-node-connect-7d85dfc575-xslh6

HTTP/1.1 GET /

Host: 192.168.39.211:31358
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (32.44s)
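Once the deployment is exposed, the connectivity check boils down to two steps: ask minikube for the NodePort URL, then issue a plain HTTP GET and read the echo-server's description of the request. A Go sketch of those two steps, reusing the service name and profile from this run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Resolve the NodePort URL the same way the test does at functional_test.go:1654.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-736676",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("could not resolve service URL:", err)
		return
	}
	url := strings.TrimSpace(string(out))

	// kicbase/echo-server answers with a description of the request, so a
	// plain GET proves end-to-end connectivity through the NodePort.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s\n", body)
}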

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (49.69s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [30903f80-a085-401b-8181-3a40e825c3dc] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007067834s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-736676 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-736676 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-736676 get pvc myclaim -o=json
I1210 05:55:54.655596   12588 retry.go:31] will retry after 2.083883723s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:c96388c9-0c50-4d64-b6b0-24b321fee06f ResourceVersion:745 Generation:0 CreationTimestamp:2025-12-10 05:55:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001833810 VolumeMode:0xc001833820 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-736676 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-736676 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [79e3b77c-e115-4da3-80a1-a6915ed3cb88] Pending
helpers_test.go:353: "sp-pod" [79e3b77c-e115-4da3-80a1-a6915ed3cb88] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [79e3b77c-e115-4da3-80a1-a6915ed3cb88] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 33.004039035s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-736676 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-736676 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-736676 apply -f testdata/storage-provisioner/pod.yaml
I1210 05:56:30.925663   12588 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [9b5aeb10-6c21-4bbf-b440-1dbad9e13fa0] Pending
helpers_test.go:353: "sp-pod" [9b5aeb10-6c21-4bbf-b440-1dbad9e13fa0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003672112s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-736676 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.69s)
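The retry at 05:55:54 is simply the test waiting for the claim to move from Pending to Bound once the storage-provisioner picks it up. A Go sketch of that polling loop, assuming the same kubectl context and the claim name from testdata/storage-provisioner/pvc.yaml (the timeout is chosen here, not taken from the test):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the claim until the storage-provisioner binds it, mirroring the
	// "phase = Pending, want Bound" retry seen in this log.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-736676",
			"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			fmt.Println("pvc myclaim is Bound")
			return
		}
		fmt.Printf("pvc phase = %q, want \"Bound\"; retrying\n", phase)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the pvc to bind")
}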

TestFunctional/parallel/SSHCmd (0.37s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

TestFunctional/parallel/CpCmd (1.22s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh -n functional-736676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 cp functional-736676:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1926654671/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh -n functional-736676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh -n functional-736676 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.22s)
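Each `minikube cp` above is verified by ssh-ing into the node and cat-ing the destination. A Go sketch of one such round trip that compares the remote content against the local testdata file (binary path, profile, and paths taken from this run):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		fmt.Println("read local file:", err)
		return
	}

	// Copy the file into the node, then read it back over ssh, as
	// helpers_test.go:574 and :552 do in this log.
	mk := "out/minikube-linux-amd64"
	if err := exec.Command(mk, "-p", "functional-736676", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	remote, err := exec.Command(mk, "-p", "functional-736676", "ssh", "-n",
		"functional-736676", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("ssh cat failed:", err)
		return
	}
	if bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)) {
		fmt.Println("copied content matches the local testdata file")
	} else {
		fmt.Println("content mismatch after cp")
	}
}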

TestFunctional/parallel/MySQL (36.53s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-736676 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-f6stt" [66ec70e5-a8dd-43ef-b24d-7058dc0f265e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-f6stt" [66ec70e5-a8dd-43ef-b24d-7058dc0f265e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.006807915s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-736676 exec mysql-6bcdcbc558-f6stt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-736676 exec mysql-6bcdcbc558-f6stt -- mysql -ppassword -e "show databases;": exit status 1 (147.904235ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1210 05:56:27.155318   12588 retry.go:31] will retry after 875.579137ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-736676 exec mysql-6bcdcbc558-f6stt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-736676 exec mysql-6bcdcbc558-f6stt -- mysql -ppassword -e "show databases;": exit status 1 (188.884996ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1210 05:56:28.220308   12588 retry.go:31] will retry after 1.835775186s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-736676 exec mysql-6bcdcbc558-f6stt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-736676 exec mysql-6bcdcbc558-f6stt -- mysql -ppassword -e "show databases;": exit status 1 (142.571351ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1210 05:56:30.199613   12588 retry.go:31] will retry after 2.706082672s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-736676 exec mysql-6bcdcbc558-f6stt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.53s)
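The ERROR 1045 (access denied) and ERROR 2002 (socket not available) responses appear only while the MySQL container is still initializing, so the test simply retries `show databases;` with growing backoff until it succeeds. A Go sketch of that retry loop; the pod name is specific to this run and the backoff schedule here is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Pod name taken from this run; a fresh deployment would differ.
	pod := "mysql-6bcdcbc558-f6stt"
	backoff := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-736676",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql answered after %d attempt(s):\n%s", attempt, out)
			return
		}
		// ERROR 1045 and ERROR 2002 in this log both mean "not ready yet".
		fmt.Printf("attempt %d failed (%v); retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mysql never became ready")
}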

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12588/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo cat /etc/test/nested/copy/12588/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12588.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo cat /etc/ssl/certs/12588.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12588.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo cat /usr/share/ca-certificates/12588.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/125882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo cat /etc/ssl/certs/125882.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/125882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo cat /usr/share/ca-certificates/125882.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.22s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-736676 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 ssh "sudo systemctl is-active docker": exit status 1 (192.48643ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 ssh "sudo systemctl is-active containerd": exit status 1 (166.454393ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
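With crio as the active runtime, the passing outcome is the non-zero exit: `systemctl is-active` prints the unit state and exits 3 for an inactive unit, which `minikube ssh` surfaces as exit status 1 here. A Go sketch that treats that case as the expected one (binary path and profile from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// isActive runs `minikube ssh "sudo systemctl is-active <unit>"` and reports
// whether the unit is active on the node.
func isActive(unit string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-736676",
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// systemctl is-active exits non-zero for inactive units (status 3 in
		// this log), which minikube ssh propagates as a failed command.
		return false
	}
	return strings.TrimSpace(string(out)) == "active"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		if isActive(unit) {
			fmt.Printf("%s is active, but crio should be the only runtime\n", unit)
		} else {
			fmt.Printf("%s is inactive, as expected\n", unit)
		}
	}
}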

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-736676 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-736676 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-qjq4x" [3debfec7-ab35-433e-958f-fa59e949070c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-qjq4x" [3debfec7-ab35-433e-958f-fa59e949070c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.005709884s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "397.395397ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.836329ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/MountCmd/any-port (8.94s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdany-port4139562670/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765346145213552380" to /tmp/TestFunctionalparallelMountCmdany-port4139562670/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765346145213552380" to /tmp/TestFunctionalparallelMountCmdany-port4139562670/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765346145213552380" to /tmp/TestFunctionalparallelMountCmdany-port4139562670/001/test-1765346145213552380
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.365546ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 05:55:45.452306   12588 retry.go:31] will retry after 298.282753ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 05:55 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 05:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 05:55 test-1765346145213552380
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh cat /mount-9p/test-1765346145213552380
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-736676 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [0f52a860-ce21-44c3-971c-7f751585bf1e] Pending
helpers_test.go:353: "busybox-mount" [0f52a860-ce21-44c3-971c-7f751585bf1e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [0f52a860-ce21-44c3-971c-7f751585bf1e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [0f52a860-ce21-44c3-971c-7f751585bf1e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005255549s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-736676 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdany-port4139562670/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.94s)
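Because the mount runs as a background daemon, the test polls `findmnt -T /mount-9p | grep 9p` over ssh until the 9p filesystem appears before exercising it from the busybox pod. A Go sketch of that polling step; the timeout and sleep interval are chosen here, not taken from the test:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Wait for the background `minikube mount` process to expose the 9p
	// filesystem inside the VM, mirroring the findmnt retry in this log.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-736676",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted over 9p")
			return
		}
		fmt.Println("mount not visible yet, retrying:", err)
		time.Sleep(300 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /mount-9p")
}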

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "281.683504ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "65.17431ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.63s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/MountCmd/specific-port (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdspecific-port2248629012/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.389704ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 05:55:54.375303   12588 retry.go:31] will retry after 679.903559ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdspecific-port2248629012/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 ssh "sudo umount -f /mount-9p": exit status 1 (177.593206ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-736676 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdspecific-port2248629012/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.64s)
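The cleanup path is tolerant: once the mount daemon has already torn the mount down, `sudo umount -f /mount-9p` reports "not mounted" and exits 32, and the test records that rather than failing. A Go sketch of a best-effort unmount with the same tolerance (profile and mount path from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Best-effort cleanup: force-unmount /mount-9p inside the VM and treat
	// "not mounted" (umount exit status 32 in this log) as already clean.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-736676",
		"ssh", "sudo umount -f /mount-9p").CombinedOutput()
	if err == nil {
		fmt.Println("unmounted /mount-9p")
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && strings.Contains(string(out), "not mounted") {
		fmt.Println("/mount-9p was not mounted; nothing to clean up")
		return
	}
	fmt.Println("unexpected umount failure:", err)
}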

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 service list -o json
functional_test.go:1504: Took "464.834639ms" to run "out/minikube-linux-amd64 -p functional-736676 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.211:31288
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.211:31288
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
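The three ServiceCmd variants above differ only in how the endpoint is printed; a hedged sketch of using the plain URL form and the Go-template form together (assumes the hello-node service answers plain HTTP, as echo-server does):

	URL=$(out/minikube-linux-amd64 -p functional-736676 service hello-node --url)
	curl -s "$URL"
	# node IP only, via a Go template
	out/minikube-linux-amd64 -p functional-736676 service hello-node --url --format='{{.IP}}'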

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1516962583/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1516962583/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1516962583/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T" /mount1: exit status 1 (260.480729ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:55:56.057393   12588 retry.go:31] will retry after 536.912338ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T" /mount3
I1210 05:55:57.083612   12588 detect.go:223] nested VM detected
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-736676 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1516962583/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1516962583/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1516962583/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)
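A sketch of the cleanup path being tested: several mounts of one host directory, then a single --kill that tears all of them down (host path illustrative; the & backgrounding stands in for the test's daemon helper):

	out/minikube-linux-amd64 mount -p functional-736676 /tmp/hostdir:/mount1 &
	out/minikube-linux-amd64 mount -p functional-736676 /tmp/hostdir:/mount2 &
	out/minikube-linux-amd64 -p functional-736676 ssh "findmnt -T /mount1"
	out/minikube-linux-amd64 mount -p functional-736676 --kill=true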

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736676 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-736676
localhost/kicbase/echo-server:functional-736676
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736676 image ls --format short --alsologtostderr:
I1210 05:56:06.479545   19158 out.go:360] Setting OutFile to fd 1 ...
I1210 05:56:06.479840   19158 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:06.479851   19158 out.go:374] Setting ErrFile to fd 2...
I1210 05:56:06.479857   19158 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:06.480041   19158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:56:06.480677   19158 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:06.480800   19158 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:06.483032   19158 ssh_runner.go:195] Run: systemctl --version
I1210 05:56:06.485250   19158 main.go:143] libmachine: domain functional-736676 has defined MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:06.485692   19158 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6e:12:27", ip: ""} in network mk-functional-736676: {Iface:virbr1 ExpiryTime:2025-12-10 06:53:37 +0000 UTC Type:0 Mac:52:54:00:6e:12:27 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-736676 Clientid:01:52:54:00:6e:12:27}
I1210 05:56:06.485723   19158 main.go:143] libmachine: domain functional-736676 has defined IP address 192.168.39.211 and MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:06.485867   19158 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-736676/id_rsa Username:docker}
I1210 05:56:06.588853   19158 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736676 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/my-image                      │ functional-736676  │ bbf7ae2391fe8 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-736676  │ 626cd4949fbde │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-736676  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736676 image ls --format table --alsologtostderr:
I1210 05:56:21.080710   19270 out.go:360] Setting OutFile to fd 1 ...
I1210 05:56:21.080823   19270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:21.080835   19270 out.go:374] Setting ErrFile to fd 2...
I1210 05:56:21.080841   19270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:21.081036   19270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:56:21.081596   19270 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:21.081682   19270 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:21.083749   19270 ssh_runner.go:195] Run: systemctl --version
I1210 05:56:21.085872   19270 main.go:143] libmachine: domain functional-736676 has defined MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:21.086306   19270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6e:12:27", ip: ""} in network mk-functional-736676: {Iface:virbr1 ExpiryTime:2025-12-10 06:53:37 +0000 UTC Type:0 Mac:52:54:00:6e:12:27 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-736676 Clientid:01:52:54:00:6e:12:27}
I1210 05:56:21.086331   19270 main.go:143] libmachine: domain functional-736676 has defined IP address 192.168.39.211 and MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:21.086481   19270 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-736676/id_rsa Username:docker}
I1210 05:56:21.176859   19270 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736676 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"626cd4949fbde223b90c4a67fd53cd1ee59595ec3dfa8fac31712a43d2af2431","repoDigests":["localhost/minikube-local-cache-test@sha256:0e7907a9c660c79b9ca0564bbcb8a6b190fe0721ca61ea0dbf38728322d4a934"],"repoTags":["localhost/minikube-local-cache-test:functional-736676"],"size":"3330"},{"id":"bbf7ae2391fe89e1ff47f6a5a3283380e9ef7d24b11830de1adbc394628e3ce3","repoDigests":["localhost/my-image@sha256:59d13178cd6ff75a244c233218a2ba9379e9fc726848cfd73475359822a57b31"],"repoTags":["localhost/my-image:functional-736676"],"size":"1468600"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/li
brary/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"123ff46d4cc776f45d054676f892a6bd446e6
8916ca0f863e122602817959bd6","repoDigests":["docker.io/library/604f30926e889b983f0b35994057d234ed11c91de737d78a7cb5c34feea93117-tmp@sha256:cb9d572222d0529500cb474420908e58173118c16c7c13d36b36dc0ae91a7ac2"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f500
95b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b28
2b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-736676"],"size":"4944818"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha25
6:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8
s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a133982
26c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736676 image ls --format json --alsologtostderr:
I1210 05:56:20.869107   19260 out.go:360] Setting OutFile to fd 1 ...
I1210 05:56:20.869202   19260 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:20.869207   19260 out.go:374] Setting ErrFile to fd 2...
I1210 05:56:20.869211   19260 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:20.869431   19260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:56:20.869976   19260 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:20.870059   19260 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:20.872235   19260 ssh_runner.go:195] Run: systemctl --version
I1210 05:56:20.874415   19260 main.go:143] libmachine: domain functional-736676 has defined MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:20.874822   19260 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6e:12:27", ip: ""} in network mk-functional-736676: {Iface:virbr1 ExpiryTime:2025-12-10 06:53:37 +0000 UTC Type:0 Mac:52:54:00:6e:12:27 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-736676 Clientid:01:52:54:00:6e:12:27}
I1210 05:56:20.874845   19260 main.go:143] libmachine: domain functional-736676 has defined IP address 192.168.39.211 and MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:20.875005   19260 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-736676/id_rsa Username:docker}
I1210 05:56:20.958526   19260 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736676 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 626cd4949fbde223b90c4a67fd53cd1ee59595ec3dfa8fac31712a43d2af2431
repoDigests:
- localhost/minikube-local-cache-test@sha256:0e7907a9c660c79b9ca0564bbcb8a6b190fe0721ca61ea0dbf38728322d4a934
repoTags:
- localhost/minikube-local-cache-test:functional-736676
size: "3330"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-736676
size: "4944818"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736676 image ls --format yaml --alsologtostderr:
I1210 05:56:07.322851   19169 out.go:360] Setting OutFile to fd 1 ...
I1210 05:56:07.322975   19169 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:07.322987   19169 out.go:374] Setting ErrFile to fd 2...
I1210 05:56:07.322993   19169 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:07.323311   19169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:56:07.324095   19169 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:07.324247   19169 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:07.326906   19169 ssh_runner.go:195] Run: systemctl --version
I1210 05:56:07.329758   19169 main.go:143] libmachine: domain functional-736676 has defined MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:07.330275   19169 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6e:12:27", ip: ""} in network mk-functional-736676: {Iface:virbr1 ExpiryTime:2025-12-10 06:53:37 +0000 UTC Type:0 Mac:52:54:00:6e:12:27 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-736676 Clientid:01:52:54:00:6e:12:27}
I1210 05:56:07.330309   19169 main.go:143] libmachine: domain functional-736676 has defined IP address 192.168.39.211 and MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:07.330493   19169 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-736676/id_rsa Username:docker}
I1210 05:56:07.433771   19169 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.65s)
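The four ImageList variants above are the same call with --format short|table|json|yaml. A sketch of pulling only the tagged names out of the JSON form on the host (assumes jq; the repoTags key is visible in the JSON output above):

	out/minikube-linux-amd64 -p functional-736676 image ls --format json | jq -r '.[].repoTags[]?' | sort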

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (12.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736676 ssh pgrep buildkitd: exit status 1 (187.452315ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image build -t localhost/my-image:functional-736676 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-736676 image build -t localhost/my-image:functional-736676 testdata/build --alsologtostderr: (12.480206744s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736676 image build -t localhost/my-image:functional-736676 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 123ff46d4cc
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-736676
--> bbf7ae2391f
Successfully tagged localhost/my-image:functional-736676
bbf7ae2391fe89e1ff47f6a5a3283380e9ef7d24b11830de1adbc394628e3ce3
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736676 image build -t localhost/my-image:functional-736676 testdata/build --alsologtostderr:
I1210 05:56:08.160448   19191 out.go:360] Setting OutFile to fd 1 ...
I1210 05:56:08.160815   19191 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:08.160827   19191 out.go:374] Setting ErrFile to fd 2...
I1210 05:56:08.160833   19191 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:56:08.161153   19191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:56:08.161971   19191 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:08.162711   19191 config.go:182] Loaded profile config "functional-736676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:56:08.164646   19191 ssh_runner.go:195] Run: systemctl --version
I1210 05:56:08.167003   19191 main.go:143] libmachine: domain functional-736676 has defined MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:08.167521   19191 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6e:12:27", ip: ""} in network mk-functional-736676: {Iface:virbr1 ExpiryTime:2025-12-10 06:53:37 +0000 UTC Type:0 Mac:52:54:00:6e:12:27 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-736676 Clientid:01:52:54:00:6e:12:27}
I1210 05:56:08.167562   19191 main.go:143] libmachine: domain functional-736676 has defined IP address 192.168.39.211 and MAC address 52:54:00:6e:12:27 in network mk-functional-736676
I1210 05:56:08.167726   19191 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-736676/id_rsa Username:docker}
I1210 05:56:08.272308   19191 build_images.go:162] Building image from path: /tmp/build.1805613632.tar
I1210 05:56:08.272426   19191 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 05:56:08.293237   19191 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1805613632.tar
I1210 05:56:08.302552   19191 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1805613632.tar: stat -c "%s %y" /var/lib/minikube/build/build.1805613632.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1805613632.tar': No such file or directory
I1210 05:56:08.302601   19191 ssh_runner.go:362] scp /tmp/build.1805613632.tar --> /var/lib/minikube/build/build.1805613632.tar (3072 bytes)
I1210 05:56:08.351607   19191 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1805613632
I1210 05:56:08.378799   19191 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1805613632 -xf /var/lib/minikube/build/build.1805613632.tar
I1210 05:56:08.400176   19191 crio.go:315] Building image: /var/lib/minikube/build/build.1805613632
I1210 05:56:08.400273   19191 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-736676 /var/lib/minikube/build/build.1805613632 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 05:56:20.539330   19191 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-736676 /var/lib/minikube/build/build.1805613632 --cgroup-manager=cgroupfs: (12.139018562s)
I1210 05:56:20.539440   19191 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1805613632
I1210 05:56:20.557673   19191 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1805613632.tar
I1210 05:56:20.573214   19191 build_images.go:218] Built localhost/my-image:functional-736676 from /tmp/build.1805613632.tar
I1210 05:56:20.573254   19191 build_images.go:134] succeeded building to: functional-736676
I1210 05:56:20.573259   19191 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (12.90s)
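A sketch of running the same build by hand. The testdata/build context is tiny; its Dockerfile is reconstructed here from the STEP lines above, not copied from the repo:

	# Dockerfile (reconstructed):
	#   FROM gcr.io/k8s-minikube/busybox
	#   RUN true
	#   ADD content.txt /
	out/minikube-linux-amd64 -p functional-736676 image build -t localhost/my-image:functional-736676 testdata/build --alsologtostderr
	out/minikube-linux-amd64 -p functional-736676 image ls   # localhost/my-image:functional-736676 should now be listed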

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.723953145s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-736676
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image load --daemon kicbase/echo-server:functional-736676 --alsologtostderr
2025/12/10 05:55:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-736676 image load --daemon kicbase/echo-server:functional-736676 --alsologtostderr: (1.122940799s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image load --daemon kicbase/echo-server:functional-736676 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-736676
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image load --daemon kicbase/echo-server:functional-736676 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image save kicbase/echo-server:functional-736676 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image rm kicbase/echo-server:functional-736676 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-736676
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-736676 image save --daemon kicbase/echo-server:functional-736676 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-736676
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
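The ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon tests above form a round trip; a sketch of the same cycle with an illustrative tar path:

	out/minikube-linux-amd64 -p functional-736676 image save kicbase/echo-server:functional-736676 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-736676 image rm kicbase/echo-server:functional-736676
	out/minikube-linux-amd64 -p functional-736676 image load /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-736676 image ls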

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-736676
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-736676
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-736676
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22089-8667/.minikube/files/etc/test/nested/copy/12588/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (74.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-323414 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1210 05:56:44.188844   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:57:11.898213   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-323414 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m14.459610023s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (74.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (52.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1210 05:57:53.555653   12588 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-323414 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-323414 --alsologtostderr -v=8: (52.461488188s)
functional_test.go:678: soft start took 52.461869579s for "functional-323414" cluster.
I1210 05:58:46.017553   12588 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (52.46s)
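The soft start is simply a second, flag-less start against the existing profile; minikube is expected to reuse the configuration recorded by the first start rather than re-provision. A sketch (flags taken from the StartWithProxy run above):

	out/minikube-linux-amd64 start -p functional-323414 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
	out/minikube-linux-amd64 start -p functional-323414 --alsologtostderr -v=8   # soft start: no config flags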

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-323414 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-323414 cache add registry.k8s.io/pause:3.1: (1.075384365s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-323414 cache add registry.k8s.io/pause:3.3: (1.123729876s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-323414 cache add registry.k8s.io/pause:latest: (1.095833905s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach990277774/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 cache add minikube-local-cache-test:functional-323414
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-323414 cache add minikube-local-cache-test:functional-323414: (1.799659533s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 cache delete minikube-local-cache-test:functional-323414
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-323414
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.974235ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)
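Taken together, the CacheCmd tests above walk the whole cache lifecycle; a condensed sketch:

	out/minikube-linux-amd64 -p functional-323414 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 cache list
	out/minikube-linux-amd64 -p functional-323414 ssh sudo crictl inspecti registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 -p functional-323414 cache reload    # pushes cached images back into the node if they were removed
	out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1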

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 kubectl -- --context functional-323414 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-323414 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (30.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-323414 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-323414 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.82970804s)
functional_test.go:776: restart took 30.82983149s for "functional-323414" cluster.
I1210 05:59:24.608184   12588 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (30.83s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-323414 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
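A minimal sketch of the same health check done outside the harness: query the control-plane pods as JSON with kubectl and report each pod's phase and Ready condition. The context name is taken from this run; the struct covers only the fields inspected here.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList mirrors just the fields of `kubectl get po -o json` that we read.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-323414",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}
```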

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-323414 logs: (1.309465538s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3430529358/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-323414 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3430529358/001/logs.txt: (1.307922331s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-323414 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-323414
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-323414: exit status 115 (234.170312ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.87:30635 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-323414 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-323414 delete -f testdata/invalidsvc.yaml: (1.317919183s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.75s)
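The interesting behaviour here is the exit code: the service table is still printed, but because no running pod backs `invalid-svc`, the command fails with SVC_UNREACHABLE (exit status 115 in this run). A hedged sketch that distinguishes that case, assuming minikube on PATH and the same profile:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-323414", "service", "invalid-svc")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		// 115 is the code this run reported alongside SVC_UNREACHABLE.
		fmt.Println("service exists but has no running endpoints")
	} else if err != nil {
		fmt.Println("service lookup failed:", err)
	}
}
```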

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 config get cpus: exit status 14 (70.209776ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 config get cpus: exit status 14 (56.708515ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.40s)
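The distinguishing detail above is the exit code: `config get` returns status 14 when the key is unset rather than printing an empty value. A small sketch (minikube on PATH, profile name from this run) that surfaces that code:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `minikube config get cpus` exits 14 when the key is not set,
	// which is the state the test treats as "unset".
	cmd := exec.Command("minikube", "-p", "functional-323414", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus is set to %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		fmt.Println("cpus is not set (exit status 14)")
	default:
		fmt.Println("unexpected failure:", err, string(out))
	}
}
```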

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (15.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-323414 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-323414 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 21139: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (15.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-323414 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-323414 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (114.65644ms)

                                                
                                                
-- stdout --
	* [functional-323414] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:59:34.343412   21070 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:59:34.343529   21070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:59:34.343537   21070 out.go:374] Setting ErrFile to fd 2...
	I1210 05:59:34.343541   21070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:59:34.343763   21070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 05:59:34.344282   21070 out.go:368] Setting JSON to false
	I1210 05:59:34.345159   21070 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2518,"bootTime":1765343856,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:59:34.345213   21070 start.go:143] virtualization: kvm guest
	I1210 05:59:34.347430   21070 out.go:179] * [functional-323414] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:59:34.348795   21070 notify.go:221] Checking for updates...
	I1210 05:59:34.348853   21070 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:59:34.350295   21070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:59:34.351645   21070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 05:59:34.352837   21070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:59:34.354134   21070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:59:34.355337   21070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:59:34.357130   21070 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 05:59:34.357855   21070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:59:34.394784   21070 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 05:59:34.396075   21070 start.go:309] selected driver: kvm2
	I1210 05:59:34.396093   21070 start.go:927] validating driver "kvm2" against &{Name:functional-323414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-323414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:59:34.396198   21070 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:59:34.398651   21070 out.go:203] 
	W1210 05:59:34.399933   21070 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:59:34.401045   21070 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-323414 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)
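A dry run exercises only validation, so it is a cheap way to check flag combinations against an existing profile. The sketch below (profile name, flags, and the exit code 23 are taken from this log, not from minikube documentation) asks for deliberately too little memory and reports how validation fails:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --dry-run validates the requested configuration without touching the VM.
	cmd := exec.Command("minikube", "start", "-p", "functional-323414",
		"--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.35.0-beta.0")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		// In this run the message was RSRC_INSUFFICIENT_REQ_MEMORY.
		fmt.Println("validation rejected the request, as expected:")
	}
	fmt.Print(string(out))
}
```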

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-323414 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-323414 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (122.644166ms)

                                                
                                                
-- stdout --
	* [functional-323414] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:59:34.227563   21045 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:59:34.227796   21045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:59:34.227803   21045 out.go:374] Setting ErrFile to fd 2...
	I1210 05:59:34.227807   21045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:59:34.228081   21045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 05:59:34.228544   21045 out.go:368] Setting JSON to false
	I1210 05:59:34.229321   21045 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2518,"bootTime":1765343856,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:59:34.229399   21045 start.go:143] virtualization: kvm guest
	I1210 05:59:34.231351   21045 out.go:179] * [functional-323414] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 05:59:34.232712   21045 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:59:34.232695   21045 notify.go:221] Checking for updates...
	I1210 05:59:34.235086   21045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:59:34.236439   21045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 05:59:34.237669   21045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 05:59:34.239007   21045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:59:34.240442   21045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:59:34.242200   21045 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 05:59:34.242857   21045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:59:34.280167   21045 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1210 05:59:34.281330   21045 start.go:309] selected driver: kvm2
	I1210 05:59:34.281345   21045 start.go:927] validating driver "kvm2" against &{Name:functional-323414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-323414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:59:34.281460   21045 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:59:34.283466   21045 out.go:203] 
	W1210 05:59:34.284784   21045 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 05:59:34.286042   21045 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.67s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (31.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-323414 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-323414 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-vbp92" [50651a20-ccd7-481d-b6ca-d3e32113f004] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-vbp92" [50651a20-ccd7-481d-b6ca-d3e32113f004] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 31.00392325s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.87:32618
functional_test.go:1680: http://192.168.39.87:32618: success! body:
Request served by hello-node-connect-9f67c86d4-vbp92

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.87:32618
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (31.46s)
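Condensed into a sketch, the flow being exercised is: create a deployment, expose it as a NodePort, ask minikube for the node URL, and fetch it. Image and names are reused from the log; the `kubectl wait` step is my addition so the GET only runs once the deployment is available.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ctx := []string{"--context", "functional-323414"}

	// Deployment + NodePort service, as in the test.
	run("kubectl", append(ctx, "create", "deployment", "hello-node-connect", "--image", "kicbase/echo-server")...)
	run("kubectl", append(ctx, "expose", "deployment", "hello-node-connect", "--type=NodePort", "--port=8080")...)
	run("kubectl", append(ctx, "wait", "--for=condition=available", "deployment/hello-node-connect", "--timeout=120s")...)

	// minikube resolves the node IP and NodePort into a URL such as
	// http://192.168.39.87:32618 (the value seen in this run).
	url, err := run("minikube", "-p", "functional-323414", "service", "hello-node-connect", "--url")
	if err != nil {
		fmt.Println("service URL lookup failed:", err)
		return
	}
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n%s", url, resp.Status, body)
}
```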

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (48.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [71d7c097-e93e-4600-839f-fb9d2f4de92e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003318897s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-323414 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-323414 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-323414 get pvc myclaim -o=json
I1210 05:59:38.300979   12588 retry.go:31] will retry after 2.394703787s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:0c34d23a-d9c5-4ad4-bb47-8b3404b3ec3e ResourceVersion:756 Generation:0 CreationTimestamp:2025-12-10 05:59:38 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00066c230 VolumeMode:0xc00066c240 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-323414 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-323414 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [03d57418-a555-4d72-a02f-a722cbb9386c] Pending
helpers_test.go:353: "sp-pod" [03d57418-a555-4d72-a02f-a722cbb9386c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [03d57418-a555-4d72-a02f-a722cbb9386c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.004322263s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-323414 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-323414 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-323414 delete -f testdata/storage-provisioner/pod.yaml: (1.961411972s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-323414 apply -f testdata/storage-provisioner/pod.yaml
I1210 06:00:03.019457   12588 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [53e16e54-0d05-4c34-9778-65dd926af94f] Pending
helpers_test.go:353: "sp-pod" [53e16e54-0d05-4c34-9778-65dd926af94f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [53e16e54-0d05-4c34-9778-65dd926af94f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.00695022s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-323414 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (48.13s)
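The point of this test is that data written to the claim survives deleting and recreating the pod. Separately, the retry entry above shows the harness polling the claim until it reports Bound; a stripped-down version of that polling loop follows (claim and context names from this run; the jsonpath approach is mine, not the harness's):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForBound polls a PersistentVolumeClaim until its phase is Bound
// or the deadline passes, mirroring the retry logged above.
func waitForBound(context, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pvc %s still %q after %s", name, phase, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForBound("functional-323414", "myclaim", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("myclaim is Bound")
}
```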

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh -n functional-323414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 cp functional-323414:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp965772246/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh -n functional-323414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh -n functional-323414 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (39.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-323414 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-lxf4k" [7d53a1b8-f27e-48eb-a0dd-8376e644fa5a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-lxf4k" [7d53a1b8-f27e-48eb-a0dd-8376e644fa5a] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 34.00356646s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-323414 exec mysql-7d7b65bc95-lxf4k -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-323414 exec mysql-7d7b65bc95-lxf4k -- mysql -ppassword -e "show databases;": exit status 1 (185.246425ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 06:00:20.624671   12588 retry.go:31] will retry after 1.441787273s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-323414 exec mysql-7d7b65bc95-lxf4k -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-323414 exec mysql-7d7b65bc95-lxf4k -- mysql -ppassword -e "show databases;": exit status 1 (162.946415ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 06:00:22.229701   12588 retry.go:31] will retry after 1.909125313s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-323414 exec mysql-7d7b65bc95-lxf4k -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-323414 exec mysql-7d7b65bc95-lxf4k -- mysql -ppassword -e "show databases;": exit status 1 (139.212992ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 06:00:24.279495   12588 retry.go:31] will retry after 1.238684344s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-323414 exec mysql-7d7b65bc95-lxf4k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (39.39s)
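The failures above are the usual MySQL warm-up noise (auth and socket errors while the server initializes), so the harness simply retries with a growing delay until the query succeeds. A sketch of that retry pattern, with the pod name and password taken from the log and backoff values that are purely illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-7d7b65bc95-lxf4k" // pod name from this run

	// Retry `show databases;` until mysqld inside the pod accepts the query,
	// roughly doubling the delay between attempts.
	delay := time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-323414",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("mysql never became ready")
}
```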

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12588/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo cat /etc/test/nested/copy/12588/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12588.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo cat /etc/ssl/certs/12588.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12588.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo cat /usr/share/ca-certificates/12588.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/125882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo cat /etc/ssl/certs/125882.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/125882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo cat /usr/share/ca-certificates/125882.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.16s)
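The paths checked above correspond to minikube's certificate sync: a certificate placed on the host side shows up inside the VM both under /usr/share/ca-certificates and as a hashed entry in /etc/ssl/certs. A sketch that spot-checks the same locations seen in this run (file names are specific to this CI job):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Locations observed in the log for the synced test certificate.
	paths := []string{
		"/etc/ssl/certs/12588.pem",
		"/usr/share/ca-certificates/12588.pem",
		"/etc/ssl/certs/51391683.0", // openssl-style hash symlink
	}
	for _, p := range paths {
		err := exec.Command("minikube", "-p", "functional-323414",
			"ssh", "sudo test -s "+p).Run()
		if err != nil {
			fmt.Printf("%s: missing or empty (%v)\n", p, err)
			continue
		}
		fmt.Printf("%s: present\n", p)
	}
}
```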

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-323414 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 ssh "sudo systemctl is-active docker": exit status 1 (195.332226ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 ssh "sudo systemctl is-active containerd": exit status 1 (183.577433ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.38s)
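`systemctl is-active` prints the unit state and encodes it in the exit code (0 for active, non-zero otherwise), which is why the test treats exit status 3 plus "inactive" as a pass for the runtimes that should be off. A sketch of the same check against this run's profile, with crio expected active and docker/containerd expected inactive:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// With crio as the active runtime, docker and containerd should be inactive.
	for _, unit := range []string{"crio", "docker", "containerd"} {
		// Only stdout matters here: systemctl prints the state itself.
		out, _ := exec.Command("minikube", "-p", "functional-323414",
			"ssh", "sudo systemctl is-active "+unit).Output()
		fmt.Printf("%-10s %s\n", unit, strings.TrimSpace(string(out)))
	}
}
```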

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.64s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-323414 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-323414 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-flxwn" [bcd53522-c862-4fc2-8b63-2a5261aa5e3b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-flxwn" [bcd53522-c862-4fc2-8b63-2a5261aa5e3b] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.007993436s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "269.042076ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.124417ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "293.701944ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.559291ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3321620181/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765346373179234624" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3321620181/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765346373179234624" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3321620181/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765346373179234624" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3321620181/001/test-1765346373179234624
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (178.907667ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:59:33.358471   12588 retry.go:31] will retry after 531.947352ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 05:59 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 05:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 05:59 test-1765346373179234624
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh cat /mount-9p/test-1765346373179234624
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-323414 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [144e1d8d-4411-46fb-80cf-8038ad1aee28] Pending
helpers_test.go:353: "busybox-mount" [144e1d8d-4411-46fb-80cf-8038ad1aee28] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [144e1d8d-4411-46fb-80cf-8038ad1aee28] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [144e1d8d-4411-46fb-80cf-8038ad1aee28] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003586197s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-323414 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh stat /mount-9p/created-by-pod
I1210 05:59:40.931052   12588 detect.go:223] nested VM detected
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3321620181/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.15s)
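For reference, the 9p mount flow exercised above can be reproduced by hand against the same profile; /tmp/mount-src is a placeholder for any host directory (the test uses a per-run temp dir), and minikube stands in for the locally built out/minikube-linux-amd64 binary used in the log:

  # expose a host directory inside the guest over 9p (the mount command stays running)
  minikube mount -p functional-323414 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  # confirm the guest sees a 9p filesystem at the mount point
  minikube -p functional-323414 ssh "findmnt -T /mount-9p | grep 9p"
  # list the files written on the host side
  minikube -p functional-323414 ssh -- ls -la /mount-9p
  # tear the mount down when done
  minikube -p functional-323414 ssh "sudo umount -f /mount-9p"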

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3185082878/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (193.226241ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:59:41.520088   12588 retry.go:31] will retry after 425.090341ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3185082878/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 ssh "sudo umount -f /mount-9p": exit status 1 (194.597324ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-323414 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3185082878/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.41s)
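The specific-port variant above pins the 9p server to a fixed host port via --port, which matters when the port must be known in advance; a minimal sketch using the same values as the log (host path again a placeholder):

  minikube mount -p functional-323414 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 --port 46464 &
  # verification and cleanup are the same as in the any-port case
  minikube -p functional-323414 ssh "findmnt -T /mount-9p | grep 9p"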

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 service list -o json
functional_test.go:1504: Took "454.177963ms" to run "out/minikube-linux-amd64 -p functional-323414 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.87:31300
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1924682867/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1924682867/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1924682867/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T" /mount1: exit status 1 (248.981243ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:59:42.989784   12588 retry.go:31] will retry after 697.309559ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-323414 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1924682867/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1924682867/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-323414 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1924682867/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.57s)
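VerifyCleanup above starts three concurrent mounts of the same host directory and then removes them all at once with --kill instead of unmounting each one; roughly (host path a placeholder):

  minikube mount -p functional-323414 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
  minikube mount -p functional-323414 /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
  minikube mount -p functional-323414 /tmp/mount-src:/mount3 --alsologtostderr -v=1 &
  # kill every mount process belonging to this profile in one step
  minikube mount -p functional-323414 --kill=true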

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.87:31300
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.42s)
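The ServiceCmd subtests above are the different ways minikube can resolve a NodePort service; the equivalent manual commands, using the hello-node service from the log, are:

  minikube -p functional-323414 service list                                # table of services and URLs
  minikube -p functional-323414 service list -o json                        # same data as JSON
  minikube -p functional-323414 service --namespace=default --https --url hello-node
  minikube -p functional-323414 service hello-node --url                    # plain URL, e.g. http://192.168.39.87:31300
  minikube -p functional-323414 service hello-node --url --format={{.IP}}   # just the node IP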

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-323414 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-323414
localhost/kicbase/echo-server:functional-323414
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-323414 image ls --format short --alsologtostderr:
I1210 05:59:54.522573   21973 out.go:360] Setting OutFile to fd 1 ...
I1210 05:59:54.522687   21973 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:54.522699   21973 out.go:374] Setting ErrFile to fd 2...
I1210 05:59:54.522704   21973 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:54.522917   21973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:59:54.523528   21973 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:54.523649   21973 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:54.525904   21973 ssh_runner.go:195] Run: systemctl --version
I1210 05:59:54.527935   21973 main.go:143] libmachine: domain functional-323414 has defined MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:54.528313   21973 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:43:8b", ip: ""} in network mk-functional-323414: {Iface:virbr1 ExpiryTime:2025-12-10 06:56:53 +0000 UTC Type:0 Mac:52:54:00:b3:43:8b Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-323414 Clientid:01:52:54:00:b3:43:8b}
I1210 05:59:54.528342   21973 main.go:143] libmachine: domain functional-323414 has defined IP address 192.168.39.87 and MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:54.528474   21973 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-323414/id_rsa Username:docker}
I1210 05:59:54.607581   21973 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-323414 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-323414  │ 078cc6e3a159b │ 1.47MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-323414  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-323414  │ 626cd4949fbde │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-323414 image ls --format table --alsologtostderr:
I1210 05:59:58.471498   22055 out.go:360] Setting OutFile to fd 1 ...
I1210 05:59:58.471754   22055 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:58.471763   22055 out.go:374] Setting ErrFile to fd 2...
I1210 05:59:58.471767   22055 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:58.471959   22055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:59:58.472488   22055 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:58.472578   22055 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:58.474809   22055 ssh_runner.go:195] Run: systemctl --version
I1210 05:59:58.476974   22055 main.go:143] libmachine: domain functional-323414 has defined MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:58.477388   22055 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:43:8b", ip: ""} in network mk-functional-323414: {Iface:virbr1 ExpiryTime:2025-12-10 06:56:53 +0000 UTC Type:0 Mac:52:54:00:b3:43:8b Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-323414 Clientid:01:52:54:00:b3:43:8b}
I1210 05:59:58.477413   22055 main.go:143] libmachine: domain functional-323414 has defined IP address 192.168.39.87 and MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:58.477559   22055 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-323414/id_rsa Username:docker}
I1210 05:59:58.556229   22055 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-323414 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"078cc6e3a159bdce792466663fd5161080f66502dc50576189dad57749bc7780","repoDigests":["localhost/my-image@sha256:13cea28c616f7e667b9a9ee0e5a7468149622391a067040019b609915a4a9602"],"repoTags":["localhost/my-image:functional-323414"],"size":"1468600"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"7687253
5"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce
05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-323414"],"size":"4944818"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.
k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"626cd4949fbde223b90c4a67fd53cd1ee59595ec3dfa8fac31712a43d2af2431","repoDigests":["localhost/minikub
e-local-cache-test@sha256:0e7907a9c660c79b9ca0564bbcb8a6b190fe0721ca61ea0dbf38728322d4a934"],"repoTags":["localhost/minikube-local-cache-test:functional-323414"],"size":"3330"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de5
4dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b
5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"0681f9995aaa196d8afbee1dd98f1af793b35efc2a7de27d2f3a13669c8befaa","repoDigests":["docker.io/library/4fd9954a6399db33a761e3798b7e4b0533b8e1f4bdd201564b7c6f7e5b43f2c1-tmp@sha256:f8fa3ab30501a33017662e70a97c4d64747151869c2e8a4007a55ca7ec85879c"],"repoTags":[],"size":"1466018"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54
242145"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-323414 image ls --format json --alsologtostderr:
I1210 05:59:58.255914   22044 out.go:360] Setting OutFile to fd 1 ...
I1210 05:59:58.256163   22044 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:58.256173   22044 out.go:374] Setting ErrFile to fd 2...
I1210 05:59:58.256178   22044 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:58.256428   22044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:59:58.257012   22044 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:58.257118   22044 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:58.259293   22044 ssh_runner.go:195] Run: systemctl --version
I1210 05:59:58.261830   22044 main.go:143] libmachine: domain functional-323414 has defined MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:58.262284   22044 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:43:8b", ip: ""} in network mk-functional-323414: {Iface:virbr1 ExpiryTime:2025-12-10 06:56:53 +0000 UTC Type:0 Mac:52:54:00:b3:43:8b Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-323414 Clientid:01:52:54:00:b3:43:8b}
I1210 05:59:58.262310   22044 main.go:143] libmachine: domain functional-323414 has defined IP address 192.168.39.87 and MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:58.262486   22044 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-323414/id_rsa Username:docker}
I1210 05:59:58.343194   22044 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-323414 image ls --format yaml --alsologtostderr:
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-323414
size: "4944818"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 626cd4949fbde223b90c4a67fd53cd1ee59595ec3dfa8fac31712a43d2af2431
repoDigests:
- localhost/minikube-local-cache-test@sha256:0e7907a9c660c79b9ca0564bbcb8a6b190fe0721ca61ea0dbf38728322d4a934
repoTags:
- localhost/minikube-local-cache-test:functional-323414
size: "3330"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-323414 image ls --format yaml --alsologtostderr:
I1210 05:59:54.705176   21984 out.go:360] Setting OutFile to fd 1 ...
I1210 05:59:54.705294   21984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:54.705303   21984 out.go:374] Setting ErrFile to fd 2...
I1210 05:59:54.705307   21984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:54.705527   21984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:59:54.706038   21984 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:54.706126   21984 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:54.708495   21984 ssh_runner.go:195] Run: systemctl --version
I1210 05:59:54.710897   21984 main.go:143] libmachine: domain functional-323414 has defined MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:54.711368   21984 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:43:8b", ip: ""} in network mk-functional-323414: {Iface:virbr1 ExpiryTime:2025-12-10 06:56:53 +0000 UTC Type:0 Mac:52:54:00:b3:43:8b Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-323414 Clientid:01:52:54:00:b3:43:8b}
I1210 05:59:54.711407   21984 main.go:143] libmachine: domain functional-323414 has defined IP address 192.168.39.87 and MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:54.711602   21984 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-323414/id_rsa Username:docker}
I1210 05:59:54.791826   21984 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.18s)
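The four ImageList variants above are one query rendered four ways; each shells into the node and runs crictl images under the hood, as the stderr traces show. Equivalent invocations:

  minikube -p functional-323414 image ls --format short   # one image reference per line
  minikube -p functional-323414 image ls --format table   # boxed table with image ID and size
  minikube -p functional-323414 image ls --format json    # machine-readable list
  minikube -p functional-323414 image ls --format yaml    # same fields as YAML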

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-323414 ssh pgrep buildkitd: exit status 1 (150.081679ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image build -t localhost/my-image:functional-323414 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-323414 image build -t localhost/my-image:functional-323414 testdata/build --alsologtostderr: (3.034230578s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-323414 image build -t localhost/my-image:functional-323414 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0681f9995aa
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-323414
--> 078cc6e3a15
Successfully tagged localhost/my-image:functional-323414
078cc6e3a159bdce792466663fd5161080f66502dc50576189dad57749bc7780
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-323414 image build -t localhost/my-image:functional-323414 testdata/build --alsologtostderr:
I1210 05:59:55.043733   22006 out.go:360] Setting OutFile to fd 1 ...
I1210 05:59:55.043990   22006 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:55.044001   22006 out.go:374] Setting ErrFile to fd 2...
I1210 05:59:55.044005   22006 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:59:55.044218   22006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
I1210 05:59:55.045838   22006 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:55.046517   22006 config.go:182] Loaded profile config "functional-323414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:59:55.049016   22006 ssh_runner.go:195] Run: systemctl --version
I1210 05:59:55.052102   22006 main.go:143] libmachine: domain functional-323414 has defined MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:55.052594   22006 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:43:8b", ip: ""} in network mk-functional-323414: {Iface:virbr1 ExpiryTime:2025-12-10 06:56:53 +0000 UTC Type:0 Mac:52:54:00:b3:43:8b Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:functional-323414 Clientid:01:52:54:00:b3:43:8b}
I1210 05:59:55.052628   22006 main.go:143] libmachine: domain functional-323414 has defined IP address 192.168.39.87 and MAC address 52:54:00:b3:43:8b in network mk-functional-323414
I1210 05:59:55.052881   22006 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/functional-323414/id_rsa Username:docker}
I1210 05:59:55.132236   22006 build_images.go:162] Building image from path: /tmp/build.3402666374.tar
I1210 05:59:55.132318   22006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 05:59:55.145063   22006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3402666374.tar
I1210 05:59:55.150499   22006 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3402666374.tar: stat -c "%s %y" /var/lib/minikube/build/build.3402666374.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3402666374.tar': No such file or directory
I1210 05:59:55.150536   22006 ssh_runner.go:362] scp /tmp/build.3402666374.tar --> /var/lib/minikube/build/build.3402666374.tar (3072 bytes)
I1210 05:59:55.181926   22006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3402666374
I1210 05:59:55.194767   22006 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3402666374 -xf /var/lib/minikube/build/build.3402666374.tar
I1210 05:59:55.207536   22006 crio.go:315] Building image: /var/lib/minikube/build/build.3402666374
I1210 05:59:55.207615   22006 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-323414 /var/lib/minikube/build/build.3402666374 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 05:59:57.974621   22006 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-323414 /var/lib/minikube/build/build.3402666374 --cgroup-manager=cgroupfs: (2.766974922s)
I1210 05:59:57.974727   22006 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3402666374
I1210 05:59:57.995132   22006 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3402666374.tar
I1210 05:59:58.007589   22006 build_images.go:218] Built localhost/my-image:functional-323414 from /tmp/build.3402666374.tar
I1210 05:59:58.007625   22006 build_images.go:134] succeeded building to: functional-323414
I1210 05:59:58.007632   22006 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.37s)
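ImageBuild ships a small build context to the node and builds it there with podman (there is no local docker daemon under CRI-O), as the stderr trace shows. A context equivalent to the three STEPs in the log would look roughly like this; content.txt is whatever file the testdata/build directory provides:

  # Dockerfile (reconstructed from the STEP lines above)
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /

  # build it inside the cluster node and tag the result locally
  minikube -p functional-323414 image build -t localhost/my-image:functional-323414 testdata/build --alsologtostderr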

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (2.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.234165373s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-323414
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (2.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)
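The three UpdateContextCmd cases run the same command against differently shaped kubeconfigs (no changes needed, no minikube cluster entry, no clusters at all) and expect it to succeed in each case:

  # rewrite the kubeconfig entry for this profile to point at the current API server address
  minikube -p functional-323414 update-context --alsologtostderr -v=2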

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image load --daemon kicbase/echo-server:functional-323414 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-323414 image load --daemon kicbase/echo-server:functional-323414 --alsologtostderr: (1.009004117s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image load --daemon kicbase/echo-server:functional-323414 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
2025/12/10 05:59:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-323414
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image load --daemon kicbase/echo-server:functional-323414 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)
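The Setup and ImageLoad*/ImageReload* tests above push an image from the host docker daemon into the cluster's CRI-O image store; the basic round trip is:

  # prepare a locally tagged image on the host (as Setup does)
  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-323414
  # copy it from the host daemon into the node's container runtime
  minikube -p functional-323414 image load --daemon kicbase/echo-server:functional-323414
  # confirm it is now visible to CRI-O
  minikube -p functional-323414 image ls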

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image save kicbase/echo-server:functional-323414 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image rm kicbase/echo-server:functional-323414 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-323414 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.191372229s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.38s)
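ImageSaveToFile, ImageRemove and ImageLoadFromFile round-trip an image through a tarball instead of the docker daemon, which is the path to use when moving images between machines; the /tmp path here is a placeholder (the job writes into its Jenkins workspace):

  minikube -p functional-323414 image save kicbase/echo-server:functional-323414 /tmp/echo-server-save.tar --alsologtostderr
  minikube -p functional-323414 image rm kicbase/echo-server:functional-323414 --alsologtostderr
  minikube -p functional-323414 image load /tmp/echo-server-save.tar --alsologtostderr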

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-323414
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-323414 image save --daemon kicbase/echo-server:functional-323414 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-323414
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.55s)
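
The inverse direction, pushing the image from the cluster runtime back into the host docker daemon, is what ImageSaveDaemon checks (sketch, same assumptions; the inspect uses the localhost/ prefix the image ends up under, as the passing step above shows):

    out/minikube-linux-amd64 -p functional-323414 image save --daemon kicbase/echo-server:functional-323414
    docker image inspect localhost/kicbase/echo-server:functional-323414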

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-323414
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-323414
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-323414
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (209.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1210 06:00:44.979478   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:44.985903   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:44.997401   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:45.018847   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:45.060250   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:45.142220   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:45.304066   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:45.625817   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:46.267894   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:47.549854   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:50.112269   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:55.234487   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:01:05.475904   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:01:25.957681   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:01:44.186780   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:02:06.919681   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:03:28.841764   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m28.550477699s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (209.11s)
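
The multi-control-plane topology used by the rest of this group can be started by hand with the same flags the test passes (a sketch; ha-426453 is this run's profile name, and the later status output shows three control-plane nodes for it):

    # HA cluster on the KVM driver with the crio runtime, waiting for all components
    out/minikube-linux-amd64 -p ha-426453 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5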

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 kubectl -- rollout status deployment/busybox: (4.500846106s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-4szgk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-s2ztg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-vh8fq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-4szgk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-s2ztg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-vh8fq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-4szgk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-s2ztg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-vh8fq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.88s)
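
The deploy check reduces to applying the busybox DNS test manifest, waiting for the rollout, and resolving names from every replica (sketch; the manifest path is the repository's testdata file, and <busybox-pod> stands for each pod name returned by the jsonpath query above):

    out/minikube-linux-amd64 -p ha-426453 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 -p ha-426453 kubectl -- rollout status deployment/busybox
    # repeated for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local
    out/minikube-linux-amd64 -p ha-426453 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local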

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-4szgk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-4szgk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-s2ztg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-s2ztg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-vh8fq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 kubectl -- exec busybox-7b57f96db7-vh8fq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
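
Per pod, the host-connectivity check resolves host.minikube.internal and pings the resulting address (sketch; 192.168.39.1 is the host-side address of this run's KVM network, and the awk/cut pipeline only extracts the resolved IP from nslookup's output):

    out/minikube-linux-amd64 -p ha-426453 kubectl -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 -p ha-426453 kubectl -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"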

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (43.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 node add --alsologtostderr -v 5
E1210 06:04:32.045256   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:32.051661   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:32.063089   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:32.084482   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:32.125896   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:32.207404   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:32.368969   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:32.690759   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:33.332213   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:34.614084   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:37.176893   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:42.299032   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 node add --alsologtostderr -v 5: (42.707353364s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.39s)
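
Adding the worker is a single node add; without --control-plane the new machine joins as a worker, which is how ha-426453-m04 shows up in the status output later in this group (sketch):

    out/minikube-linux-amd64 -p ha-426453 node add --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5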

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-426453 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)
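
The HAppy*/Degraded* checks in this group all read the same machine-readable listing; judging by the test names, they assert that the profile's reported health flips between happy and degraded as nodes are stopped, deleted, and restored. To inspect it by hand:

    out/minikube-linux-amd64 profile list --output json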

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp testdata/cp-test.txt ha-426453:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037373319/001/cp-test_ha-426453.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453:/home/docker/cp-test.txt ha-426453-m02:/home/docker/cp-test_ha-426453_ha-426453-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m02 "sudo cat /home/docker/cp-test_ha-426453_ha-426453-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453:/home/docker/cp-test.txt ha-426453-m03:/home/docker/cp-test_ha-426453_ha-426453-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m03 "sudo cat /home/docker/cp-test_ha-426453_ha-426453-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453:/home/docker/cp-test.txt ha-426453-m04:/home/docker/cp-test_ha-426453_ha-426453-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m04 "sudo cat /home/docker/cp-test_ha-426453_ha-426453-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp testdata/cp-test.txt ha-426453-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037373319/001/cp-test_ha-426453-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m02:/home/docker/cp-test.txt ha-426453:/home/docker/cp-test_ha-426453-m02_ha-426453.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453 "sudo cat /home/docker/cp-test_ha-426453-m02_ha-426453.txt"
E1210 06:04:52.540711   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m02:/home/docker/cp-test.txt ha-426453-m03:/home/docker/cp-test_ha-426453-m02_ha-426453-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m03 "sudo cat /home/docker/cp-test_ha-426453-m02_ha-426453-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m02:/home/docker/cp-test.txt ha-426453-m04:/home/docker/cp-test_ha-426453-m02_ha-426453-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m04 "sudo cat /home/docker/cp-test_ha-426453-m02_ha-426453-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp testdata/cp-test.txt ha-426453-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037373319/001/cp-test_ha-426453-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m03:/home/docker/cp-test.txt ha-426453:/home/docker/cp-test_ha-426453-m03_ha-426453.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453 "sudo cat /home/docker/cp-test_ha-426453-m03_ha-426453.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m03:/home/docker/cp-test.txt ha-426453-m02:/home/docker/cp-test_ha-426453-m03_ha-426453-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m02 "sudo cat /home/docker/cp-test_ha-426453-m03_ha-426453-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m03:/home/docker/cp-test.txt ha-426453-m04:/home/docker/cp-test_ha-426453-m03_ha-426453-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m04 "sudo cat /home/docker/cp-test_ha-426453-m03_ha-426453-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp testdata/cp-test.txt ha-426453-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037373319/001/cp-test_ha-426453-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m04:/home/docker/cp-test.txt ha-426453:/home/docker/cp-test_ha-426453-m04_ha-426453.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453 "sudo cat /home/docker/cp-test_ha-426453-m04_ha-426453.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m04:/home/docker/cp-test.txt ha-426453-m02:/home/docker/cp-test_ha-426453-m04_ha-426453-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m02 "sudo cat /home/docker/cp-test_ha-426453-m04_ha-426453-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 cp ha-426453-m04:/home/docker/cp-test.txt ha-426453-m03:/home/docker/cp-test_ha-426453-m04_ha-426453-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m03 "sudo cat /home/docker/cp-test_ha-426453-m04_ha-426453-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.88s)
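
The copy matrix above is many invocations of the same two commands, host-to-node and node-to-node, each verified over ssh; a representative pair (sketch, using the test's own paths):

    # host -> node
    out/minikube-linux-amd64 -p ha-426453 cp testdata/cp-test.txt ha-426453:/home/docker/cp-test.txt
    # node -> node, then verify on the destination node
    out/minikube-linux-amd64 -p ha-426453 cp ha-426453:/home/docker/cp-test.txt ha-426453-m02:/home/docker/cp-test_ha-426453_ha-426453-m02.txt
    out/minikube-linux-amd64 -p ha-426453 ssh -n ha-426453-m02 "sudo cat /home/docker/cp-test_ha-426453_ha-426453-m02.txt"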

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (90.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 node stop m02 --alsologtostderr -v 5
E1210 06:05:13.022866   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:05:44.979385   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:05:53.985643   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:06:12.683381   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 node stop m02 --alsologtostderr -v 5: (1m29.522856412s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5: exit status 7 (508.730053ms)

                                                
                                                
-- stdout --
	ha-426453
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-426453-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-426453-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-426453-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:06:28.619385   25229 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:06:28.619504   25229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:06:28.619516   25229 out.go:374] Setting ErrFile to fd 2...
	I1210 06:06:28.619522   25229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:06:28.619739   25229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:06:28.619919   25229 out.go:368] Setting JSON to false
	I1210 06:06:28.619944   25229 mustload.go:66] Loading cluster: ha-426453
	I1210 06:06:28.620094   25229 notify.go:221] Checking for updates...
	I1210 06:06:28.620262   25229 config.go:182] Loaded profile config "ha-426453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:06:28.620273   25229 status.go:174] checking status of ha-426453 ...
	I1210 06:06:28.623271   25229 status.go:371] ha-426453 host status = "Running" (err=<nil>)
	I1210 06:06:28.623297   25229 host.go:66] Checking if "ha-426453" exists ...
	I1210 06:06:28.626601   25229 main.go:143] libmachine: domain ha-426453 has defined MAC address 52:54:00:0d:3e:ed in network mk-ha-426453
	I1210 06:06:28.627198   25229 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0d:3e:ed", ip: ""} in network mk-ha-426453: {Iface:virbr1 ExpiryTime:2025-12-10 07:00:41 +0000 UTC Type:0 Mac:52:54:00:0d:3e:ed Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-426453 Clientid:01:52:54:00:0d:3e:ed}
	I1210 06:06:28.627229   25229 main.go:143] libmachine: domain ha-426453 has defined IP address 192.168.39.168 and MAC address 52:54:00:0d:3e:ed in network mk-ha-426453
	I1210 06:06:28.627434   25229 host.go:66] Checking if "ha-426453" exists ...
	I1210 06:06:28.627733   25229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:06:28.630403   25229 main.go:143] libmachine: domain ha-426453 has defined MAC address 52:54:00:0d:3e:ed in network mk-ha-426453
	I1210 06:06:28.630817   25229 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0d:3e:ed", ip: ""} in network mk-ha-426453: {Iface:virbr1 ExpiryTime:2025-12-10 07:00:41 +0000 UTC Type:0 Mac:52:54:00:0d:3e:ed Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-426453 Clientid:01:52:54:00:0d:3e:ed}
	I1210 06:06:28.630852   25229 main.go:143] libmachine: domain ha-426453 has defined IP address 192.168.39.168 and MAC address 52:54:00:0d:3e:ed in network mk-ha-426453
	I1210 06:06:28.631032   25229 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/ha-426453/id_rsa Username:docker}
	I1210 06:06:28.722389   25229 ssh_runner.go:195] Run: systemctl --version
	I1210 06:06:28.730284   25229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:06:28.748893   25229 kubeconfig.go:125] found "ha-426453" server: "https://192.168.39.254:8443"
	I1210 06:06:28.748934   25229 api_server.go:166] Checking apiserver status ...
	I1210 06:06:28.748993   25229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:28.771082   25229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	W1210 06:06:28.782845   25229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:06:28.782933   25229 ssh_runner.go:195] Run: ls
	I1210 06:06:28.788384   25229 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1210 06:06:28.793159   25229 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1210 06:06:28.793189   25229 status.go:463] ha-426453 apiserver status = Running (err=<nil>)
	I1210 06:06:28.793200   25229 status.go:176] ha-426453 status: &{Name:ha-426453 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:06:28.793219   25229 status.go:174] checking status of ha-426453-m02 ...
	I1210 06:06:28.794993   25229 status.go:371] ha-426453-m02 host status = "Stopped" (err=<nil>)
	I1210 06:06:28.795012   25229 status.go:384] host is not running, skipping remaining checks
	I1210 06:06:28.795018   25229 status.go:176] ha-426453-m02 status: &{Name:ha-426453-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:06:28.795031   25229 status.go:174] checking status of ha-426453-m03 ...
	I1210 06:06:28.796276   25229 status.go:371] ha-426453-m03 host status = "Running" (err=<nil>)
	I1210 06:06:28.796300   25229 host.go:66] Checking if "ha-426453-m03" exists ...
	I1210 06:06:28.798875   25229 main.go:143] libmachine: domain ha-426453-m03 has defined MAC address 52:54:00:f5:7d:6e in network mk-ha-426453
	I1210 06:06:28.799319   25229 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:7d:6e", ip: ""} in network mk-ha-426453: {Iface:virbr1 ExpiryTime:2025-12-10 07:02:40 +0000 UTC Type:0 Mac:52:54:00:f5:7d:6e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-426453-m03 Clientid:01:52:54:00:f5:7d:6e}
	I1210 06:06:28.799340   25229 main.go:143] libmachine: domain ha-426453-m03 has defined IP address 192.168.39.230 and MAC address 52:54:00:f5:7d:6e in network mk-ha-426453
	I1210 06:06:28.799585   25229 host.go:66] Checking if "ha-426453-m03" exists ...
	I1210 06:06:28.799818   25229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:06:28.801872   25229 main.go:143] libmachine: domain ha-426453-m03 has defined MAC address 52:54:00:f5:7d:6e in network mk-ha-426453
	I1210 06:06:28.802200   25229 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:7d:6e", ip: ""} in network mk-ha-426453: {Iface:virbr1 ExpiryTime:2025-12-10 07:02:40 +0000 UTC Type:0 Mac:52:54:00:f5:7d:6e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-426453-m03 Clientid:01:52:54:00:f5:7d:6e}
	I1210 06:06:28.802226   25229 main.go:143] libmachine: domain ha-426453-m03 has defined IP address 192.168.39.230 and MAC address 52:54:00:f5:7d:6e in network mk-ha-426453
	I1210 06:06:28.802394   25229 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/ha-426453-m03/id_rsa Username:docker}
	I1210 06:06:28.890506   25229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:06:28.909884   25229 kubeconfig.go:125] found "ha-426453" server: "https://192.168.39.254:8443"
	I1210 06:06:28.909915   25229 api_server.go:166] Checking apiserver status ...
	I1210 06:06:28.909950   25229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:28.930410   25229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1828/cgroup
	W1210 06:06:28.942599   25229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1828/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:06:28.942651   25229 ssh_runner.go:195] Run: ls
	I1210 06:06:28.948179   25229 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1210 06:06:28.953270   25229 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1210 06:06:28.953293   25229 status.go:463] ha-426453-m03 apiserver status = Running (err=<nil>)
	I1210 06:06:28.953303   25229 status.go:176] ha-426453-m03 status: &{Name:ha-426453-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:06:28.953321   25229 status.go:174] checking status of ha-426453-m04 ...
	I1210 06:06:28.955124   25229 status.go:371] ha-426453-m04 host status = "Running" (err=<nil>)
	I1210 06:06:28.955157   25229 host.go:66] Checking if "ha-426453-m04" exists ...
	I1210 06:06:28.958006   25229 main.go:143] libmachine: domain ha-426453-m04 has defined MAC address 52:54:00:9c:b3:71 in network mk-ha-426453
	I1210 06:06:28.958602   25229 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:b3:71", ip: ""} in network mk-ha-426453: {Iface:virbr1 ExpiryTime:2025-12-10 07:04:19 +0000 UTC Type:0 Mac:52:54:00:9c:b3:71 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-426453-m04 Clientid:01:52:54:00:9c:b3:71}
	I1210 06:06:28.958643   25229 main.go:143] libmachine: domain ha-426453-m04 has defined IP address 192.168.39.26 and MAC address 52:54:00:9c:b3:71 in network mk-ha-426453
	I1210 06:06:28.958814   25229 host.go:66] Checking if "ha-426453-m04" exists ...
	I1210 06:06:28.959138   25229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:06:28.961635   25229 main.go:143] libmachine: domain ha-426453-m04 has defined MAC address 52:54:00:9c:b3:71 in network mk-ha-426453
	I1210 06:06:28.962173   25229 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:b3:71", ip: ""} in network mk-ha-426453: {Iface:virbr1 ExpiryTime:2025-12-10 07:04:19 +0000 UTC Type:0 Mac:52:54:00:9c:b3:71 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-426453-m04 Clientid:01:52:54:00:9c:b3:71}
	I1210 06:06:28.962207   25229 main.go:143] libmachine: domain ha-426453-m04 has defined IP address 192.168.39.26 and MAC address 52:54:00:9c:b3:71 in network mk-ha-426453
	I1210 06:06:28.962403   25229 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/ha-426453-m04/id_rsa Username:docker}
	I1210 06:06:29.048049   25229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:06:29.067324   25229 status.go:176] ha-426453-m04 status: &{Name:ha-426453-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (90.03s)
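
Note that the non-zero status exit above is the expected outcome: with m02 stopped, status reports the node as Stopped and exits with code 7, which is what the test asserts. To reproduce (sketch):

    out/minikube-linux-amd64 -p ha-426453 node stop m02 --alsologtostderr -v 5
    # exits 7 in this state: m02 reports host/kubelet/apiserver Stopped
    out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5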

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (44.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 node start m02 --alsologtostderr -v 5
E1210 06:06:44.186460   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 node start m02 --alsologtostderr -v 5: (43.692103499s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 stop --alsologtostderr -v 5
E1210 06:07:15.907636   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:08:07.260983   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:09:32.050104   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:09:59.752549   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:10:44.980164   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 stop --alsologtostderr -v 5: (4m13.598355734s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 start --wait true --alsologtostderr -v 5
E1210 06:11:44.185654   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 start --wait true --alsologtostderr -v 5: (1m55.152921028s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.89s)
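
The keep-nodes check is a full stop followed by a waited start, with the node list compared before and after (sketch):

    out/minikube-linux-amd64 -p ha-426453 node list --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-426453 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-426453 start --wait true --alsologtostderr -v 5
    # should print the same set of nodes as before the restart
    out/minikube-linux-amd64 -p ha-426453 node list --alsologtostderr -v 5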

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 node delete m03 --alsologtostderr -v 5: (17.683093856s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.32s)
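
Deleting the secondary control plane and confirming it is gone from both minikube and the API server (sketch):

    out/minikube-linux-amd64 -p ha-426453 node delete m03 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5
    # the deleted node should no longer appear, and the remaining nodes should be Ready
    kubectl get nodes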

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (238.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 stop --alsologtostderr -v 5
E1210 06:14:32.046518   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:15:44.979803   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:44.186182   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:08.047793   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 stop --alsologtostderr -v 5: (3m58.003322033s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5: exit status 7 (63.865047ms)

                                                
                                                
-- stdout --
	ha-426453
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-426453-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-426453-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:17:40.498638   28522 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:17:40.498769   28522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:17:40.498778   28522 out.go:374] Setting ErrFile to fd 2...
	I1210 06:17:40.498782   28522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:17:40.498971   28522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:17:40.499141   28522 out.go:368] Setting JSON to false
	I1210 06:17:40.499164   28522 mustload.go:66] Loading cluster: ha-426453
	I1210 06:17:40.499335   28522 notify.go:221] Checking for updates...
	I1210 06:17:40.500100   28522 config.go:182] Loaded profile config "ha-426453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:17:40.500135   28522 status.go:174] checking status of ha-426453 ...
	I1210 06:17:40.502818   28522 status.go:371] ha-426453 host status = "Stopped" (err=<nil>)
	I1210 06:17:40.502836   28522 status.go:384] host is not running, skipping remaining checks
	I1210 06:17:40.502841   28522 status.go:176] ha-426453 status: &{Name:ha-426453 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:17:40.502858   28522 status.go:174] checking status of ha-426453-m02 ...
	I1210 06:17:40.504126   28522 status.go:371] ha-426453-m02 host status = "Stopped" (err=<nil>)
	I1210 06:17:40.504140   28522 status.go:384] host is not running, skipping remaining checks
	I1210 06:17:40.504144   28522 status.go:176] ha-426453-m02 status: &{Name:ha-426453-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:17:40.504155   28522 status.go:174] checking status of ha-426453-m04 ...
	I1210 06:17:40.505345   28522 status.go:371] ha-426453-m04 host status = "Stopped" (err=<nil>)
	I1210 06:17:40.505369   28522 status.go:384] host is not running, skipping remaining checks
	I1210 06:17:40.505375   28522 status.go:176] ha-426453-m04 status: &{Name:ha-426453-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (238.07s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (89.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m28.676581233s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (89.31s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 node add --control-plane --alsologtostderr -v 5
E1210 06:19:32.046214   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-426453 node add --control-plane --alsologtostderr -v 5: (1m19.58351028s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.26s)
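
Re-adding a control-plane node is the same node add used for the worker earlier, plus --control-plane (sketch):

    out/minikube-linux-amd64 -p ha-426453 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-426453 status --alsologtostderr -v 5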

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

                                                
                                    
TestJSONOutput/start/Command (74.71s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-126561 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1210 06:20:44.979595   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:55.114506   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:21:44.186560   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-126561 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.709900313s)
--- PASS: TestJSONOutput/start/Command (74.71s)
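
The TestJSONOutput group drives the ordinary start/pause/unpause/stop commands with --output=json; judging by the subtest names (Audit, DistinctCurrentSteps, IncreasingCurrentSteps), it then asserts over the emitted JSON step events. The start invocation, runnable as-is apart from the profile name (sketch):

    out/minikube-linux-amd64 start -p json-output-126561 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio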

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-126561 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-126561 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.43s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-126561 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-126561 --output=json --user=testUser: (7.429436649s)
--- PASS: TestJSONOutput/stop/Command (7.43s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-809595 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-809595 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (78.003764ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f84102b5-2a83-470c-859c-f6481b480e25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-809595] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"911cb4b7-64b4-4264-97b2-5e0ef0fea4f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22089"}}
	{"specversion":"1.0","id":"137dffdf-4ea8-4dcc-983a-70c44d2466a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f64bba83-23de-4ea5-8780-81e3bb2fe2aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig"}}
	{"specversion":"1.0","id":"1a534623-e9da-45ef-b6b1-0e903398e1f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube"}}
	{"specversion":"1.0","id":"1cdfe51a-d821-4d3c-81f2-9c33c60acba4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e9be70b9-9e6c-40c6-86ed-433d9a8aaa48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"79375f7b-9ccd-4a7d-bde2-5159062ed0c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-809595" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-809595
--- PASS: TestErrorJSONOutput (0.23s)
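Note on the output above: the -- stdout -- block shows the CloudEvents-style envelopes minikube prints under --output=json, one JSON object per line with a string-valued "data" payload. As a hedged illustration only (the struct and file names below are hypothetical, not minikube types), a consumer of such a stream could be sketched in Go as:

	// cloudevents_tail.go: illustrative sketch; reads a --output=json stream
	// from stdin and prints each event's type and message.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors only the envelope fields visible in the output above
	// (specversion, id, source, type, data); it is not a minikube type.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip lines that are not JSON events
			}
			// io.k8s.sigs.minikube.error events also carry "exitcode",
			// as in the DRV_UNSUPPORTED_OS event shown above.
			fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
		}
	}

Piping a command such as the `out/minikube-linux-amd64 start ... --output=json` invocation above into this sketch would print one "type: message" pair per event.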

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (77.9s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-631562 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-631562 --driver=kvm2  --container-runtime=crio: (36.053352212s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-633846 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-633846 --driver=kvm2  --container-runtime=crio: (39.336065053s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-631562
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-633846
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-633846" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-633846
helpers_test.go:176: Cleaning up "first-631562" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-631562
--- PASS: TestMinikubeProfile (77.90s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (19.57s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-673411 --memory=3072 --mount-string /tmp/TestMountStartserial3393520757/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-673411 --memory=3072 --mount-string /tmp/TestMountStartserial3393520757/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.568375614s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.57s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-673411 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-673411 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (21.06s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-689643 --memory=3072 --mount-string /tmp/TestMountStartserial3393520757/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-689643 --memory=3072 --mount-string /tmp/TestMountStartserial3393520757/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.061354854s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.06s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689643 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689643 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-673411 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689643 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689643 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-689643
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-689643: (1.210162544s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.57s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-689643
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-689643: (17.572910273s)
--- PASS: TestMountStart/serial/RestartStopped (18.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689643 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689643 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (97.45s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140746 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1210 06:24:32.047089   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:47.262551   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:44.979697   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-140746 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m37.113356742s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.45s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.88s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-140746 -- rollout status deployment/busybox: (4.317133263s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-c9664 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-fnwkb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-c9664 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-fnwkb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-c9664 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-fnwkb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.88s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-c9664 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-c9664 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-fnwkb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140746 -- exec busybox-7b57f96db7-fnwkb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                    
TestMultiNode/serial/AddNode (42.38s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-140746 -v=5 --alsologtostderr
E1210 06:26:44.185837   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-140746 -v=5 --alsologtostderr: (41.924797288s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.38s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-140746 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.46s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.05s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp testdata/cp-test.txt multinode-140746:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp multinode-140746:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile193889344/001/cp-test_multinode-140746.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp multinode-140746:/home/docker/cp-test.txt multinode-140746-m02:/home/docker/cp-test_multinode-140746_multinode-140746-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m02 "sudo cat /home/docker/cp-test_multinode-140746_multinode-140746-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp multinode-140746:/home/docker/cp-test.txt multinode-140746-m03:/home/docker/cp-test_multinode-140746_multinode-140746-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m03 "sudo cat /home/docker/cp-test_multinode-140746_multinode-140746-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp testdata/cp-test.txt multinode-140746-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp multinode-140746-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile193889344/001/cp-test_multinode-140746-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp multinode-140746-m02:/home/docker/cp-test.txt multinode-140746:/home/docker/cp-test_multinode-140746-m02_multinode-140746.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746 "sudo cat /home/docker/cp-test_multinode-140746-m02_multinode-140746.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp multinode-140746-m02:/home/docker/cp-test.txt multinode-140746-m03:/home/docker/cp-test_multinode-140746-m02_multinode-140746-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m03 "sudo cat /home/docker/cp-test_multinode-140746-m02_multinode-140746-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp testdata/cp-test.txt multinode-140746-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp multinode-140746-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile193889344/001/cp-test_multinode-140746-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp multinode-140746-m03:/home/docker/cp-test.txt multinode-140746:/home/docker/cp-test_multinode-140746-m03_multinode-140746.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746 "sudo cat /home/docker/cp-test_multinode-140746-m03_multinode-140746.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 cp multinode-140746-m03:/home/docker/cp-test.txt multinode-140746-m02:/home/docker/cp-test_multinode-140746-m03_multinode-140746-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 ssh -n multinode-140746-m02 "sudo cat /home/docker/cp-test_multinode-140746-m03_multinode-140746-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.05s)

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-140746 node stop m03: (1.623985564s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-140746 status: exit status 7 (324.109984ms)

                                                
                                                
-- stdout --
	multinode-140746
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-140746-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-140746-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-140746 status --alsologtostderr: exit status 7 (335.567564ms)

                                                
                                                
-- stdout --
	multinode-140746
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-140746-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-140746-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:26:54.778105   34393 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:26:54.778327   34393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:26:54.778335   34393 out.go:374] Setting ErrFile to fd 2...
	I1210 06:26:54.778339   34393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:26:54.778512   34393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:26:54.778665   34393 out.go:368] Setting JSON to false
	I1210 06:26:54.778688   34393 mustload.go:66] Loading cluster: multinode-140746
	I1210 06:26:54.778762   34393 notify.go:221] Checking for updates...
	I1210 06:26:54.779083   34393 config.go:182] Loaded profile config "multinode-140746": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:26:54.779106   34393 status.go:174] checking status of multinode-140746 ...
	I1210 06:26:54.781207   34393 status.go:371] multinode-140746 host status = "Running" (err=<nil>)
	I1210 06:26:54.781223   34393 host.go:66] Checking if "multinode-140746" exists ...
	I1210 06:26:54.784124   34393 main.go:143] libmachine: domain multinode-140746 has defined MAC address 52:54:00:55:75:e1 in network mk-multinode-140746
	I1210 06:26:54.784625   34393 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:75:e1", ip: ""} in network mk-multinode-140746: {Iface:virbr1 ExpiryTime:2025-12-10 07:24:34 +0000 UTC Type:0 Mac:52:54:00:55:75:e1 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-140746 Clientid:01:52:54:00:55:75:e1}
	I1210 06:26:54.784654   34393 main.go:143] libmachine: domain multinode-140746 has defined IP address 192.168.39.217 and MAC address 52:54:00:55:75:e1 in network mk-multinode-140746
	I1210 06:26:54.784837   34393 host.go:66] Checking if "multinode-140746" exists ...
	I1210 06:26:54.785046   34393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:26:54.787283   34393 main.go:143] libmachine: domain multinode-140746 has defined MAC address 52:54:00:55:75:e1 in network mk-multinode-140746
	I1210 06:26:54.787646   34393 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:75:e1", ip: ""} in network mk-multinode-140746: {Iface:virbr1 ExpiryTime:2025-12-10 07:24:34 +0000 UTC Type:0 Mac:52:54:00:55:75:e1 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-140746 Clientid:01:52:54:00:55:75:e1}
	I1210 06:26:54.787670   34393 main.go:143] libmachine: domain multinode-140746 has defined IP address 192.168.39.217 and MAC address 52:54:00:55:75:e1 in network mk-multinode-140746
	I1210 06:26:54.787809   34393 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/multinode-140746/id_rsa Username:docker}
	I1210 06:26:54.877545   34393 ssh_runner.go:195] Run: systemctl --version
	I1210 06:26:54.883882   34393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:26:54.900720   34393 kubeconfig.go:125] found "multinode-140746" server: "https://192.168.39.217:8443"
	I1210 06:26:54.900761   34393 api_server.go:166] Checking apiserver status ...
	I1210 06:26:54.900811   34393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:26:54.919942   34393 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup
	W1210 06:26:54.930945   34393 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:26:54.931009   34393 ssh_runner.go:195] Run: ls
	I1210 06:26:54.935913   34393 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1210 06:26:54.940654   34393 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I1210 06:26:54.940685   34393 status.go:463] multinode-140746 apiserver status = Running (err=<nil>)
	I1210 06:26:54.940697   34393 status.go:176] multinode-140746 status: &{Name:multinode-140746 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:26:54.940720   34393 status.go:174] checking status of multinode-140746-m02 ...
	I1210 06:26:54.942125   34393 status.go:371] multinode-140746-m02 host status = "Running" (err=<nil>)
	I1210 06:26:54.942144   34393 host.go:66] Checking if "multinode-140746-m02" exists ...
	I1210 06:26:54.944342   34393 main.go:143] libmachine: domain multinode-140746-m02 has defined MAC address 52:54:00:8c:dd:9d in network mk-multinode-140746
	I1210 06:26:54.944785   34393 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:dd:9d", ip: ""} in network mk-multinode-140746: {Iface:virbr1 ExpiryTime:2025-12-10 07:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:dd:9d Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:multinode-140746-m02 Clientid:01:52:54:00:8c:dd:9d}
	I1210 06:26:54.944813   34393 main.go:143] libmachine: domain multinode-140746-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:8c:dd:9d in network mk-multinode-140746
	I1210 06:26:54.944966   34393 host.go:66] Checking if "multinode-140746-m02" exists ...
	I1210 06:26:54.945155   34393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:26:54.947062   34393 main.go:143] libmachine: domain multinode-140746-m02 has defined MAC address 52:54:00:8c:dd:9d in network mk-multinode-140746
	I1210 06:26:54.947379   34393 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:dd:9d", ip: ""} in network mk-multinode-140746: {Iface:virbr1 ExpiryTime:2025-12-10 07:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:dd:9d Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:multinode-140746-m02 Clientid:01:52:54:00:8c:dd:9d}
	I1210 06:26:54.947403   34393 main.go:143] libmachine: domain multinode-140746-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:8c:dd:9d in network mk-multinode-140746
	I1210 06:26:54.947542   34393 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22089-8667/.minikube/machines/multinode-140746-m02/id_rsa Username:docker}
	I1210 06:26:55.027094   34393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:26:55.042995   34393 status.go:176] multinode-140746-m02 status: &{Name:multinode-140746-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:26:55.043044   34393 status.go:174] checking status of multinode-140746-m03 ...
	I1210 06:26:55.044829   34393 status.go:371] multinode-140746-m03 host status = "Stopped" (err=<nil>)
	I1210 06:26:55.044849   34393 status.go:384] host is not running, skipping remaining checks
	I1210 06:26:55.044856   34393 status.go:176] multinode-140746-m03 status: &{Name:multinode-140746-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
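Note on the output above: `minikube status` exits non-zero (status 7 in the runs above) once any node in the profile is not fully running, while still printing the per-node report. A minimal, illustrative Go sketch of that pattern, reusing the binary path and profile name from the log purely as placeholders:

	// status_check.go: illustrative sketch; runs "status" and branches on the exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-140746", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // per-node host/kubelet/apiserver report, as shown above

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running")
		case errors.As(err, &exitErr):
			// 7 was observed above with one worker stopped.
			fmt.Printf("status exited with code %d: not every node is running\n", exitErr.ExitCode())
		default:
			fmt.Println("could not run minikube:", err)
		}
	}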

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.98s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-140746 node start m03 -v=5 --alsologtostderr: (40.482305602s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.98s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (289.28s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-140746
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-140746
E1210 06:29:32.051340   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-140746: (2m44.776391059s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140746 --wait=true -v=5 --alsologtostderr
E1210 06:30:44.980039   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:31:44.185388   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-140746 --wait=true -v=5 --alsologtostderr: (2m4.378732646s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-140746
--- PASS: TestMultiNode/serial/RestartKeepsNodes (289.28s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.55s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-140746 node delete m03: (2.086907879s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.55s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (171.39s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 stop
E1210 06:33:48.051691   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:34:32.052222   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-140746 stop: (2m51.262651666s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-140746 status: exit status 7 (60.575002ms)

                                                
                                                
-- stdout --
	multinode-140746
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-140746-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-140746 status --alsologtostderr: exit status 7 (62.431192ms)

                                                
                                                
-- stdout --
	multinode-140746
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-140746-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:35:19.226452   36761 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:35:19.226752   36761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:35:19.226765   36761 out.go:374] Setting ErrFile to fd 2...
	I1210 06:35:19.226769   36761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:35:19.227020   36761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:35:19.227235   36761 out.go:368] Setting JSON to false
	I1210 06:35:19.227262   36761 mustload.go:66] Loading cluster: multinode-140746
	I1210 06:35:19.227411   36761 notify.go:221] Checking for updates...
	I1210 06:35:19.227754   36761 config.go:182] Loaded profile config "multinode-140746": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:35:19.227770   36761 status.go:174] checking status of multinode-140746 ...
	I1210 06:35:19.229739   36761 status.go:371] multinode-140746 host status = "Stopped" (err=<nil>)
	I1210 06:35:19.229754   36761 status.go:384] host is not running, skipping remaining checks
	I1210 06:35:19.229759   36761 status.go:176] multinode-140746 status: &{Name:multinode-140746 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:35:19.229790   36761 status.go:174] checking status of multinode-140746-m02 ...
	I1210 06:35:19.231317   36761 status.go:371] multinode-140746-m02 host status = "Stopped" (err=<nil>)
	I1210 06:35:19.231334   36761 status.go:384] host is not running, skipping remaining checks
	I1210 06:35:19.231339   36761 status.go:176] multinode-140746-m02 status: &{Name:multinode-140746-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (171.39s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (83.16s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140746 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1210 06:35:44.979390   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-140746 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m22.703629993s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140746 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.16s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.32s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-140746
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140746-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-140746-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (78.535813ms)

                                                
                                                
-- stdout --
	* [multinode-140746-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-140746-m02' is duplicated with machine name 'multinode-140746-m02' in profile 'multinode-140746'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140746-m03 --driver=kvm2  --container-runtime=crio
E1210 06:36:44.186025   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-140746-m03 --driver=kvm2  --container-runtime=crio: (40.10857526s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-140746
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-140746: exit status 80 (208.512825ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-140746 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-140746-m03 already exists in multinode-140746-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-140746-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.32s)

                                                
                                    
TestScheduledStopUnix (107.15s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-605059 --memory=3072 --driver=kvm2  --container-runtime=crio
E1210 06:39:32.051060   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-605059 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.559131784s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605059 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 06:40:00.972119   38979 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:40:00.972237   38979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:40:00.972249   38979 out.go:374] Setting ErrFile to fd 2...
	I1210 06:40:00.972256   38979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:40:00.972483   38979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:40:00.972716   38979 out.go:368] Setting JSON to false
	I1210 06:40:00.972799   38979 mustload.go:66] Loading cluster: scheduled-stop-605059
	I1210 06:40:00.973092   38979 config.go:182] Loaded profile config "scheduled-stop-605059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:40:00.973156   38979 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/config.json ...
	I1210 06:40:00.973328   38979 mustload.go:66] Loading cluster: scheduled-stop-605059
	I1210 06:40:00.973451   38979 config.go:182] Loaded profile config "scheduled-stop-605059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-605059 -n scheduled-stop-605059
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605059 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 06:40:01.257998   39025 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:40:01.258091   39025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:40:01.258102   39025 out.go:374] Setting ErrFile to fd 2...
	I1210 06:40:01.258108   39025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:40:01.258299   39025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:40:01.258551   39025 out.go:368] Setting JSON to false
	I1210 06:40:01.258745   39025 daemonize_unix.go:73] killing process 39014 as it is an old scheduled stop
	I1210 06:40:01.258846   39025 mustload.go:66] Loading cluster: scheduled-stop-605059
	I1210 06:40:01.259153   39025 config.go:182] Loaded profile config "scheduled-stop-605059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:40:01.259217   39025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/config.json ...
	I1210 06:40:01.259415   39025 mustload.go:66] Loading cluster: scheduled-stop-605059
	I1210 06:40:01.259522   39025 config.go:182] Loaded profile config "scheduled-stop-605059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 06:40:01.264590   12588 retry.go:31] will retry after 50.106µs: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.265761   12588 retry.go:31] will retry after 104.156µs: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.266945   12588 retry.go:31] will retry after 328.173µs: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.268076   12588 retry.go:31] will retry after 328.342µs: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.269208   12588 retry.go:31] will retry after 303.84µs: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.270331   12588 retry.go:31] will retry after 984.727µs: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.271542   12588 retry.go:31] will retry after 675.379µs: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.272671   12588 retry.go:31] will retry after 990.645µs: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.273822   12588 retry.go:31] will retry after 3.565648ms: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.278029   12588 retry.go:31] will retry after 4.544694ms: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.283220   12588 retry.go:31] will retry after 6.602106ms: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.290549   12588 retry.go:31] will retry after 11.692217ms: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.302801   12588 retry.go:31] will retry after 16.154793ms: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.320040   12588 retry.go:31] will retry after 29.050939ms: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
I1210 06:40:01.349233   12588 retry.go:31] will retry after 35.02431ms: open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605059 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-605059 -n scheduled-stop-605059
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-605059
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605059 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 06:40:26.939007   39174 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:40:26.939100   39174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:40:26.939104   39174 out.go:374] Setting ErrFile to fd 2...
	I1210 06:40:26.939108   39174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:40:26.939309   39174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:40:26.939562   39174 out.go:368] Setting JSON to false
	I1210 06:40:26.939633   39174 mustload.go:66] Loading cluster: scheduled-stop-605059
	I1210 06:40:26.939956   39174 config.go:182] Loaded profile config "scheduled-stop-605059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:40:26.940016   39174 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/scheduled-stop-605059/config.json ...
	I1210 06:40:26.940204   39174 mustload.go:66] Loading cluster: scheduled-stop-605059
	I1210 06:40:26.940292   39174 config.go:182] Loaded profile config "scheduled-stop-605059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1210 06:40:44.980140   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-605059
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-605059: exit status 7 (59.672913ms)

                                                
                                                
-- stdout --
	scheduled-stop-605059
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-605059 -n scheduled-stop-605059
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-605059 -n scheduled-stop-605059: exit status 7 (60.059913ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-605059" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-605059
--- PASS: TestScheduledStopUnix (107.15s)
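
The status checks above lean on the command's exit code as much as its output: once the scheduled stop fires, "minikube status" prints Stopped for every component and exits with status 7, which the test accepts ("may be ok"). A small sketch of reading that exit code from Go follows; the profile name is the one from this run, and the meaning of 7 is inferred only from what this log shows.

// status_exitcode_sketch.go: illustrative; the invocation mirrors the logged command.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "scheduled-stop-605059")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		// In the run above, a stopped host reports exit status 7.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}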

                                                
                                    
TestRunningBinaryUpgrade (369.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1783130972 start -p running-upgrade-069032 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1783130972 start -p running-upgrade-069032 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m8.489814977s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-069032 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1210 06:44:32.045960   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-069032 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m56.909977027s)
helpers_test.go:176: Cleaning up "running-upgrade-069032" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-069032
--- PASS: TestRunningBinaryUpgrade (369.66s)

                                                
                                    
TestPause/serial/Start (74.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-824458 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-824458 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m14.417633473s)
--- PASS: TestPause/serial/Start (74.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-894399 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-894399 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (96.677783ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-894399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
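
The exit-status-14 result above is the expected usage error: --no-kubernetes and --kubernetes-version are mutually exclusive. The sketch below shows that kind of mutual-exclusion check with the standard flag package; it is a generic illustration, not minikube's actual argument parsing.

// flag_conflict_sketch.go: generic mutual-exclusion check, not minikube code.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // usage-error exit code observed in the log above
	}
	fmt.Println("flags are consistent")
}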

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (80.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-894399 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-894399 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.11953961s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-894399 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (80.38s)

                                                
                                    
TestNetworkPlugins/group/false (3.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-579150 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-579150 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.65807ms)

                                                
                                                
-- stdout --
	* [false-579150] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:41:15.945986   40256 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:41:15.946254   40256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:41:15.946265   40256 out.go:374] Setting ErrFile to fd 2...
	I1210 06:41:15.946269   40256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:41:15.946521   40256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8667/.minikube/bin
	I1210 06:41:15.947039   40256 out.go:368] Setting JSON to false
	I1210 06:41:15.947909   40256 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5020,"bootTime":1765343856,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:41:15.947957   40256 start.go:143] virtualization: kvm guest
	I1210 06:41:15.950032   40256 out.go:179] * [false-579150] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:41:15.951430   40256 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:41:15.951453   40256 notify.go:221] Checking for updates...
	I1210 06:41:15.953947   40256 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:41:15.955230   40256 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8667/kubeconfig
	I1210 06:41:15.956540   40256 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8667/.minikube
	I1210 06:41:15.957940   40256 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:41:15.959113   40256 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:41:15.960562   40256 config.go:182] Loaded profile config "NoKubernetes-894399": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:41:15.960657   40256 config.go:182] Loaded profile config "offline-crio-832745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:41:15.960721   40256 config.go:182] Loaded profile config "pause-824458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:41:15.960816   40256 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:41:15.996130   40256 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 06:41:15.997235   40256 start.go:309] selected driver: kvm2
	I1210 06:41:15.997247   40256 start.go:927] validating driver "kvm2" against <nil>
	I1210 06:41:15.997257   40256 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:41:15.998939   40256 out.go:203] 
	W1210 06:41:15.999902   40256 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1210 06:41:16.001147   40256 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-579150 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-579150" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-579150

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579150"

                                                
                                                
----------------------- debugLogs end: false-579150 [took: 3.175804653s] --------------------------------
helpers_test.go:176: Cleaning up "false-579150" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-579150
--- PASS: TestNetworkPlugins/group/false (3.46s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-894399 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-894399 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (16.812579353s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-894399 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-894399 status -o json: exit status 2 (206.723572ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-894399","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-894399
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.92s)
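
The -- stdout -- block above is the JSON form of "minikube status -o json" for a --no-kubernetes profile: the host is Running while the kubelet and API server are Stopped. A short sketch of decoding that shape follows; the struct fields are inferred from this single sample rather than taken from minikube's source.

// status_json_sketch.go: decodes the status JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-894399","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}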

                                                
                                    
TestNoKubernetes/serial/Start (19.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-894399 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-894399 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (19.133617711s)
--- PASS: TestNoKubernetes/serial/Start (19.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22089-8667/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-894399 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-894399 "sudo systemctl is-active --quiet service kubelet": exit status 1 (171.404137ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
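
The VerifyK8sNotRunning check runs "systemctl is-active --quiet service kubelet" inside the guest over minikube ssh: a zero exit means the unit is active, a non-zero exit means it is not (here the remote command exited with status 4, which minikube ssh reports as exit status 1). A minimal sketch of the same probe from Go is below; it assumes a plain "minikube" binary on PATH and reuses the profile name from this run.

// kubelet_active_sketch.go: mirrors the logged systemctl probe.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "NoKubernetes-894399", "ssh",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active (expected for --no-kubernetes):", err)
		return
	}
	fmt.Println("kubelet is active")
}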

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.16s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-894399
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-894399: (1.423184427s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (36.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-894399 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-894399 --driver=kvm2  --container-runtime=crio: (36.403093467s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (36.40s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-894399 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-894399 "sudo systemctl is-active --quiet service kubelet": exit status 1 (173.749793ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.32s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (107.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1862075399 start -p stopped-upgrade-628038 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1862075399 start -p stopped-upgrade-628038 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m14.271218922s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1862075399 -p stopped-upgrade-628038 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1862075399 -p stopped-upgrade-628038 stop: (1.824674667s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-628038 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-628038 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (31.254323682s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.35s)
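
TestStoppedBinaryUpgrade drives a three-step flow: start the cluster with an older release binary (a temp-extracted v1.35.0 here), stop it with that same binary, then start it again with the binary under test. The sketch below reproduces that sequence with os/exec; the binary paths are placeholders for the temp file and out/minikube-linux-amd64 used in the log, and the verbose logging flags are omitted.

// upgrade_flow_sketch.go: the start/stop/start-again sequence from the log.
package main

import (
	"log"
	"os/exec"
)

func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBin := "/tmp/minikube-old"        // placeholder for the extracted v1.35.0 binary
	newBin := "out/minikube-linux-amd64" // binary under test
	profile := "stopped-upgrade-628038"

	run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=kvm2", "--container-runtime=crio")
	run(oldBin, "-p", profile, "stop")
	run(newBin, "start", "-p", profile, "--memory=3072", "--driver=kvm2", "--container-runtime=crio")
}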

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-628038
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
TestISOImage/Setup (19.66s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-747858 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-747858 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.65783769s)
--- PASS: TestISOImage/Setup (19.66s)

                                                
                                    
TestISOImage/Binaries/crictl (0.28s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.28s)

                                                
                                    
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
TestISOImage/Binaries/docker (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.16s)

                                                
                                    
TestISOImage/Binaries/git (0.15s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.15s)

                                                
                                    
TestISOImage/Binaries/iptables (0.24s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.24s)

                                                
                                    
TestISOImage/Binaries/podman (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.16s)

                                                
                                    
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
TestISOImage/Binaries/wget (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.16s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.16s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.16s)
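
The TestISOImage/Binaries subtests above all run the same check: "which <tool>" over minikube ssh must succeed for each binary expected in the guest ISO. A table-driven sketch of that pattern follows; it shells out to a plain "minikube" on PATH rather than the repo's out/minikube-linux-amd64 and reuses the guest-747858 profile from this run, so it is illustrative rather than a copy of iso_test.go.

// iso_binaries_sketch_test.go: table-driven "which <tool>" checks.
package main

import (
	"os/exec"
	"testing"
)

func TestGuestBinaries(t *testing.T) {
	binaries := []string{
		"crictl", "curl", "docker", "git", "iptables",
		"podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService",
	}
	for _, bin := range binaries {
		t.Run(bin, func(t *testing.T) {
			out, err := exec.Command("minikube", "-p", "guest-747858", "ssh", "which "+bin).CombinedOutput()
			if err != nil {
				t.Fatalf("which %s failed: %v\n%s", bin, err, out)
			}
		})
	}
}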

                                                
                                    
TestNetworkPlugins/group/auto/Start (95.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1210 06:46:44.185454   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m35.03756733s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.04s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.062499373s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.06s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-579150 "pgrep -a kubelet"
I1210 06:48:18.482845   12588 config.go:182] Loaded profile config "auto-579150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-579150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cd5wz" [bfe9cb39-65d5-426b-a429-9a3058972858] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cd5wz" [bfe9cb39-65d5-426b-a429-9a3058972858] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00512941s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-579150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)
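
The DNS subtest above verifies in-cluster name resolution by exec'ing nslookup inside the netcat deployment via kubectl. The sketch below runs the same probe from Go; the context name auto-579150 comes from this run and assumes minikube named the kubectl context after the profile.

// dns_probe_sketch.go: in-cluster DNS probe via kubectl exec.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "auto-579150",
		"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
	if err != nil {
		fmt.Printf("DNS lookup failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("DNS lookup succeeded:\n%s", out)
}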

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-579150 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
I1210 06:48:28.911391   12588 config.go:182] Loaded profile config "custom-flannel-579150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-579150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wbxgf" [269b3c8e-0d8f-4ac5-8fcc-f653481df691] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wbxgf" [269b3c8e-0d8f-4ac5-8fcc-f653481df691] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005713027s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-579150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (91.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m31.982507155s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.98s)

TestNetworkPlugins/group/flannel/Start (80s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m20.001627316s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.00s)

TestNetworkPlugins/group/enable-default-cni/Start (90.72s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1210 06:49:32.045620   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m30.722932336s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.72s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-jhzlv" [635ccd29-deb8-45f7-a846-55b3cdceab57] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004944946s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-8dfm6" [dfc7c57c-073d-4b63-8ea7-7b9e97630547] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00574295s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-579150 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-579150 "pgrep -a kubelet"
I1210 06:50:21.102166   12588 config.go:182] Loaded profile config "flannel-579150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-579150 replace --force -f testdata/netcat-deployment.yaml
I1210 06:50:21.153663   12588 config.go:182] Loaded profile config "kindnet-579150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ttqnb" [bc327f44-8b9e-4d0f-883b-c6d019d6bb41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ttqnb" [bc327f44-8b9e-4d0f-883b-c6d019d6bb41] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005565631s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-579150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ghqzl" [4233b1d8-23aa-475c-baf5-c176b840f833] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ghqzl" [4233b1d8-23aa-475c-baf5-c176b840f833] Running
E1210 06:50:28.053136   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004574941s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-579150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-579150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (84.63s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m24.630863978s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.63s)

TestNetworkPlugins/group/calico/Start (94.42s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-579150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m34.419339957s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.42s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-579150 "pgrep -a kubelet"
I1210 06:50:56.370533   12588 config.go:182] Loaded profile config "enable-default-cni-579150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-579150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8n2wf" [6910b31e-9cc1-4f50-bbad-cc74e58deee1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8n2wf" [6910b31e-9cc1-4f50-bbad-cc74e58deee1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004262527s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-579150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/FirstStart (61.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-816809 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1210 06:51:44.185084   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-816809 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.875287678s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.88s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-579150 "pgrep -a kubelet"
I1210 06:52:11.632870   12588 config.go:182] Loaded profile config "bridge-579150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (11.59s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-579150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-p8nx4" [dd00416e-816e-4fff-baef-baf2d66f560a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-p8nx4" [dd00416e-816e-4fff-baef-baf2d66f560a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004918189s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.59s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-kh5kr" [ad25b840-0a21-4e52-b3c2-6c6c39987f48] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-kh5kr" [ad25b840-0a21-4e52-b3c2-6c6c39987f48] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005749656s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-579150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-816809 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [186f20f6-c2a7-4fc9-9440-f1a21d70b5e0] Pending
helpers_test.go:353: "busybox" [186f20f6-c2a7-4fc9-9440-f1a21d70b5e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [186f20f6-c2a7-4fc9-9440-f1a21d70b5e0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004909335s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-816809 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)

TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-579150 "pgrep -a kubelet"
I1210 06:52:29.203574   12588 config.go:182] Loaded profile config "calico-579150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

TestNetworkPlugins/group/calico/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-579150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-g42lv" [de5ed4ea-0ff2-44b5-bb63-660b9bd3f11a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-g42lv" [de5ed4ea-0ff2-44b5-bb63-660b9bd3f11a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00524592s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-816809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-816809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.147279218s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-816809 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/old-k8s-version/serial/Stop (87.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-816809 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-816809 --alsologtostderr -v=3: (1m27.090073368s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (87.09s)

TestStartStop/group/no-preload/serial/FirstStart (94.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-314849 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-314849 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m34.853654634s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (94.85s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-579150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-579150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestStartStop/group/embed-certs/serial/FirstStart (87.39s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-021955 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1210 06:53:18.688111   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:18.694563   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:18.705980   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:18.727502   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:18.768980   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:18.850837   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:19.012547   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:19.334483   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:19.976190   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:21.257831   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:23.819952   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:28.941987   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:29.166626   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:29.173014   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:29.184406   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:29.205751   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:29.247191   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:29.328864   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:29.490413   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:29.811961   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:30.453478   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:31.735503   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:34.297028   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:39.184086   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:39.419209   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:49.661077   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:59.665737   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-021955 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m27.385672024s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-816809 -n old-k8s-version-816809
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-816809 -n old-k8s-version-816809: exit status 7 (60.191319ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-816809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (43.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-816809 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1210 06:54:10.142660   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-816809 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (43.037378279s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-816809 -n old-k8s-version-816809
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.32s)

TestStartStop/group/no-preload/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-314849 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c2ef223c-47ce-42ee-935e-16d5196a7c60] Pending
helpers_test.go:353: "busybox" [c2ef223c-47ce-42ee-935e-16d5196a7c60] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1210 06:54:15.117999   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [c2ef223c-47ce-42ee-935e-16d5196a7c60] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006128273s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-314849 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.38s)

TestStartStop/group/embed-certs/serial/DeployApp (12.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-021955 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dffe1acb-1c9f-47fd-8eb7-bcce2decfbe2] Pending
helpers_test.go:353: "busybox" [dffe1acb-1c9f-47fd-8eb7-bcce2decfbe2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dffe1acb-1c9f-47fd-8eb7-bcce2decfbe2] Running
E1210 06:54:32.045683   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-323414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004457178s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-021955 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-314849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-314849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.031108306s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-314849 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/Stop (90.19s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-314849 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-314849 --alsologtostderr -v=3: (1m30.191115473s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.19s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-021955 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-021955 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032344889s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-021955 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/Stop (73.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-021955 --alsologtostderr -v=3
E1210 06:54:40.627557   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-021955 --alsologtostderr -v=3: (1m13.983374258s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (73.98s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-54qqj" [6658d342-d201-482c-8d37-3ff58f7e7317] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1210 06:54:51.104129   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-54qqj" [6658d342-d201-482c-8d37-3ff58f7e7317] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.004621525s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-54qqj" [6658d342-d201-482c-8d37-3ff58f7e7317] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003809738s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-816809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-816809 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-816809 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-816809 -n old-k8s-version-816809
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-816809 -n old-k8s-version-816809: exit status 2 (218.206346ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-816809 -n old-k8s-version-816809
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-816809 -n old-k8s-version-816809: exit status 2 (218.174341ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-816809 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-816809 -n old-k8s-version-816809
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-816809 -n old-k8s-version-816809
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.52s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-289565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1210 06:55:14.997773   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:15.039534   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:15.061876   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:15.120815   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:15.223492   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:15.283109   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:15.545430   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:15.605446   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:16.187449   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:16.246989   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:17.469769   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:17.529064   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:20.031817   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:20.091386   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:25.153176   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:25.213660   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:35.394802   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:35.455397   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:44.979830   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/functional-736676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-289565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m19.019016962s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021955 -n embed-certs-021955
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021955 -n embed-certs-021955: exit status 7 (62.46043ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-021955 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/embed-certs/serial/SecondStart (46.05s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-021955 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-021955 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (45.695526474s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021955 -n embed-certs-021955
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-314849 -n no-preload-314849
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-314849 -n no-preload-314849: exit status 7 (67.56895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-314849 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (66.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-314849 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1210 06:55:55.876414   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:55.936998   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:56.645529   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:56.651974   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:56.663446   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:56.684967   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:56.726444   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:56.808091   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:56.970414   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:57.292141   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:57.934252   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:59.215668   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:01.777874   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:02.549441   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/auto-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:06.899859   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:13.026256   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:17.141199   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-314849 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m6.602807188s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-314849 -n no-preload-314849
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (66.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-289565 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c23debd6-4107-4e5a-8e2b-17a3de4fed5d] Pending
helpers_test.go:353: "busybox" [c23debd6-4107-4e5a-8e2b-17a3de4fed5d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c23debd6-4107-4e5a-8e2b-17a3de4fed5d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005820565s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-289565 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)
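
The DeployApp step above amounts to creating the busybox pod from the suite's testdata, waiting for it to become Ready, and reading the open-file limit inside it; a rough equivalent with plain kubectl (sketch only, reusing this run's context name) is:

	# testdata/busybox.yaml is the manifest shipped with the minikube test suite.
	kubectl --context default-k8s-diff-port-289565 create -f testdata/busybox.yaml
	# Wait for the pod labelled integration-test=busybox to report Ready.
	kubectl --context default-k8s-diff-port-289565 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	# The test then reads the file-descriptor limit inside the container.
	kubectl --context default-k8s-diff-port-289565 exec busybox -- /bin/sh -c "ulimit -n"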

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-hpmrk" [03a55fc5-d3bd-43f7-93dc-7840e8503e16] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1210 06:56:36.838728   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:36.899237   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-hpmrk" [03a55fc5-d3bd-43f7-93dc-7840e8503e16] Running
E1210 06:56:37.623432   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/enable-default-cni-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.003452147s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-hpmrk" [03a55fc5-d3bd-43f7-93dc-7840e8503e16] Running
E1210 06:56:44.185407   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003764888s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-021955 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
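
UserAppExistsAfterStop and AddonExistsAfterStop both just wait for the kubernetes-dashboard pods to come back healthy after the second start and then describe the metrics scraper; a manual equivalent (sketch, using this run's context) would be roughly:

	# Wait for the dashboard pod(s) restored by the second start.
	kubectl --context embed-certs-021955 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# The addon check only asserts that the scraper Deployment is describable.
	kubectl --context embed-certs-021955 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper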

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-289565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-289565 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)
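
EnableAddonWhileActive swaps the metrics-server image for a stub and points its registry at a non-existent domain, then only checks that the Deployment object exists; a hand-run sketch of the same two commands:

	# Substitute the metrics-server image/registry so no real image is pulled.
	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-289565 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	# The assertion is simply that the Deployment can be described.
	kubectl --context default-k8s-diff-port-289565 describe deploy/metrics-server -n kube-system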

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (86.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-289565 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-289565 --alsologtostderr -v=3: (1m26.233873167s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (86.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-021955 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
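
VerifyKubernetesImages lists the images loaded in the profile as JSON and reports anything outside the expected Kubernetes set (here kindnetd and the busybox test image). To inspect the same list interactively (sketch; the jq filter and the repoTags field name are assumptions, not part of the test):

	# Raw JSON image list, exactly as the test invokes it.
	out/minikube-linux-amd64 -p embed-certs-021955 image list --format=json
	# Optional: flatten to one tag per line if jq is available (assumes a repoTags field).
	out/minikube-linux-amd64 -p embed-certs-021955 image list --format=json | jq -r '.[].repoTags[]'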

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-021955 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021955 -n embed-certs-021955
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021955 -n embed-certs-021955: exit status 2 (224.003787ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-021955 -n embed-certs-021955
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-021955 -n embed-certs-021955: exit status 2 (221.713029ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-021955 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021955 -n embed-certs-021955
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-021955 -n embed-certs-021955
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.59s)
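
The Pause subtest leans on minikube's status exit codes: while paused, the apiserver reports Paused and the kubelet Stopped, each via exit status 2, and unpause restores both. A minimal sketch of that round trip (same profile name as above):

	out/minikube-linux-amd64 pause -p embed-certs-021955 --alsologtostderr -v=1
	# Both checks exit with status 2 while paused; the stdout values are what matters.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021955 -n embed-certs-021955 || true   # prints "Paused"
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-021955 -n embed-certs-021955 || true   # prints "Stopped"
	out/minikube-linux-amd64 unpause -p embed-certs-021955 --alsologtostderr -v=1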

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-634960 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-634960 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (42.489768911s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-grz9q" [f29ed5d9-34ea-4087-83a4-0a46c29fe3dd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-grz9q" [f29ed5d9-34ea-4087-83a4-0a46c29fe3dd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004171738s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-grz9q" [f29ed5d9-34ea-4087-83a4-0a46c29fe3dd] Running
E1210 06:57:12.200861   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:12.207337   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:12.218909   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:12.241179   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:12.282644   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:12.364308   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:12.525891   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:12.847825   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:13.489398   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:14.771405   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004793377s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-314849 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-314849 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-314849 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-314849 -n no-preload-314849
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-314849 -n no-preload-314849: exit status 2 (248.23612ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-314849 -n no-preload-314849
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-314849 -n no-preload-314849: exit status 2 (241.044707ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-314849 --alsologtostderr -v=1
E1210 06:57:17.333078   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-314849 -n no-preload-314849
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-314849 -n no-preload-314849
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.62s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "df -t ext4 /data | grep /data"
E1210 06:57:22.455174   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)
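
Each PersistentMounts subtest simply asserts that the given path is backed by an ext4 filesystem inside the guest, using df over ssh. A sketch that loops over the same paths exercised in this group (guest-747858 is this run's profile):

	# Each directory must live on a persistent ext4 mount inside the ISO guest.
	for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
	  out/minikube-linux-amd64 -p guest-747858 ssh "df -t ext4 $d | grep $d"
	done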

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.16s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
E1210 06:57:23.093487   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:23.174972   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.16s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
E1210 06:57:23.336520   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.16s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
E1210 06:57:23.010926   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:23.017343   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:23.028817   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:23.051170   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                    
TestISOImage/VersionJSON (0.18s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "cat /version.json"
E1210 06:57:23.657894   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: 0d7c1d9864cc7aa82e32494e32331ce8be405026
iso_test.go:118:   iso_version: v1.37.0-1765151505-21409
iso_test.go:118:   kicbase_version: v0.0.48-1764843390-22032
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.18s)
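
VersionJSON reads /version.json from the guest and parses out the build metadata shown above. To pull a single field by hand (sketch; jq on the host is an assumption):

	# Dump the whole file, exactly as the test does.
	out/minikube-linux-amd64 -p guest-747858 ssh "cat /version.json"
	# Extract one field, e.g. the ISO version, if jq is installed on the host.
	out/minikube-linux-amd64 -p guest-747858 ssh "cat /version.json" | jq -r .iso_version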

                                                
                                    
TestISOImage/eBPFSupport (0.17s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-747858 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)
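
The eBPF check only verifies that the guest kernel exposes BTF type information at /sys/kernel/btf/vmlinux; the same probe can be rerun by hand (sketch):

	# BTF presence is what the test treats as "eBPF supported".
	out/minikube-linux-amd64 -p guest-747858 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
	# The BTF blob can also be listed as a rough sanity check on its size.
	out/minikube-linux-amd64 -p guest-747858 ssh "ls -lh /sys/kernel/btf/vmlinux"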
E1210 06:57:24.837624   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:24.844136   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:24.855629   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:24.877170   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:24.918900   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:25.000775   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:25.162322   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:25.484095   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:25.582529   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:26.125393   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:27.407585   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:28.144178   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:29.969346   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:32.697131   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:33.266465   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-634960 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1210 06:57:35.091660   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-634960 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-634960 --alsologtostderr -v=3: (7.04577636s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-634960 -n newest-cni-634960
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-634960 -n newest-cni-634960: exit status 7 (59.482568ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-634960 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-634960 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1210 06:57:43.508440   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:45.333468   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:53.179515   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/bridge-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:58.760573   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:58.821079   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:58:03.990111   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/calico-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:58:05.815133   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/old-k8s-version-816809/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:58:07.267137   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/addons-873698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-634960 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (31.266973144s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-634960 -n newest-cni-634960
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-289565 -n default-k8s-diff-port-289565
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-289565 -n default-k8s-diff-port-289565: exit status 7 (65.836007ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-289565 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (42.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-289565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-289565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (42.656622208s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-289565 -n default-k8s-diff-port-289565
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (42.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-634960 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-634960 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-634960 -n newest-cni-634960
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-634960 -n newest-cni-634960: exit status 2 (216.575776ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-634960 -n newest-cni-634960
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-634960 -n newest-cni-634960: exit status 2 (221.311579ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-634960 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-634960 -n newest-cni-634960
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-634960 -n newest-cni-634960
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4nldx" [dc73e49d-c474-4a25-9a3d-f4c72a6a602e] Running
E1210 06:58:56.868275   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/custom-flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003768362s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4nldx" [dc73e49d-c474-4a25-9a3d-f4c72a6a602e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004004298s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-289565 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-289565 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-289565 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-289565 -n default-k8s-diff-port-289565
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-289565 -n default-k8s-diff-port-289565: exit status 2 (211.032796ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-289565 -n default-k8s-diff-port-289565
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-289565 -n default-k8s-diff-port-289565: exit status 2 (207.185196ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-289565 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-289565 -n default-k8s-diff-port-289565
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-289565 -n default-k8s-diff-port-289565
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.40s)
E1210 06:59:12.918742   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:12.925162   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:12.936613   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:12.958077   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:12.999546   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:13.081301   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:13.242900   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:13.564689   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:14.207036   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:15.488774   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:18.050867   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/no-preload-314849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
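The repeated cert_rotation errors above appear to come from client-go's certificate reload loop retrying with exponential backoff (note the roughly doubling gaps between timestamps) after the no-preload-314849 profile's client.crt was removed; they are background noise from an already-deleted profile rather than part of the passing Pause test. A minimal sketch, with hypothetical paths, of guarding such a reload so a missing cert is skipped instead of retried (this is not the client-go implementation):

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"os"
)

// loadClientCert loads a client certificate pair only if the cert file still exists.
// Paths below are hypothetical placeholders for a minikube profile's client cert.
func loadClientCert(certFile, keyFile string) (*tls.Certificate, error) {
	if _, err := os.Stat(certFile); errors.Is(err, os.ErrNotExist) {
		return nil, nil // profile deleted: nothing to reload, stop retrying
	}
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, fmt.Errorf("loading client cert failed: %w", err)
	}
	return &cert, nil
}

func main() {
	cert, err := loadClientCert(
		"/home/jenkins/.minikube/profiles/no-preload-314849/client.crt", // hypothetical path
		"/home/jenkins/.minikube/profiles/no-preload-314849/client.key", // hypothetical path
	)
	switch {
	case err != nil:
		fmt.Println("reload error:", err)
	case cert == nil:
		fmt.Println("cert removed; skipping reload")
	default:
		fmt.Println("cert reloaded")
	}
}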

                                                
                                    

Test skip (52/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.31
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
147 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
361 TestNetworkPlugins/group/kubenet 3.38
370 TestNetworkPlugins/group/cilium 3.72
388 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)
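This skip (and the matching kubectl skips for the other Kubernetes versions below) is a platform gate: the kubectl download check is only relevant on darwin and windows. A minimal sketch of the usual Go test idiom for such a gate, assuming the condition is simply the OS (the real check lives in aaa_download_only_test.go and may differ):

package download_test

import (
	"runtime"
	"testing"
)

// TestKubectlDownload mirrors the "Test for darwin and windows" skip above:
// on any other GOOS the test bails out immediately.
func TestKubectlDownload(t *testing.T) {
	if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
		t.Skip("Test for darwin and windows")
	}
	// ... platform-specific kubectl download assertions would go here ...
}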

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-873698 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
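All of the TunnelCmd skips above (and the TestFunctionalNewestKubernetes copies below) share one cause: the tunnel tests need to modify the host routing table, and on this runner 'route' cannot be run without a password, so functional_test_tunnel_test.go:90 bails out. A minimal sketch of that kind of pre-flight check, assuming it boils down to a non-interactive sudo probe (the exact command the suite uses may differ):

package main

import (
	"fmt"
	"os/exec"
)

// canRunRouteWithoutPassword reports whether `route` can be invoked via sudo
// without prompting; -n makes sudo fail instead of asking for a password.
func canRunRouteWithoutPassword() bool {
	err := exec.Command("sudo", "-n", "route", "-n").Run()
	return err == nil
}

func main() {
	if !canRunRouteWithoutPassword() {
		fmt.Println("password required to execute 'route', skipping tunnel tests")
		return
	}
	fmt.Println("route is usable; tunnel tests could proceed")
}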

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
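TestGvisorAddon is gated behind a test flag rather than a platform or runtime check; the message above shows --gvisor defaulting to false on this job. A minimal sketch of a flag-gated test of this shape (package and flag wiring here are illustrative, not minikube's actual definitions):

package gvisor_test

import (
	"flag"
	"testing"
)

// gvisor mirrors a command-line gate like the --gvisor=false seen above;
// run with `go test -gvisor` to opt in.
var gvisor = flag.Bool("gvisor", false, "run gVisor addon tests")

func TestGvisorAddon(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... gVisor addon assertions would go here ...
}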

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-579150 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-579150" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-579150

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579150"

                                                
                                                
----------------------- debugLogs end: kubenet-579150 [took: 3.223388335s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-579150" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-579150
--- SKIP: TestNetworkPlugins/group/kubenet (3.38s)
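The debugLogs dump above is entirely "context not found" / "profile not found" output because the kubenet-579150 profile was never started: the test skips before creating a cluster, yet the post-mortem helper still walks its full list of probes. Each probe is essentially a kubectl or minikube command addressed at the named context/profile; a minimal Go sketch of running a few of them and printing whatever comes back (probe list abbreviated, helper names illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// runProbe executes one diagnostic command and prints its combined output,
// mirroring how the debugLogs section records errors for a missing profile.
func runProbe(label string, name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf(">>> %s:\n%s", label, out)
	if err != nil {
		fmt.Println("(command failed:", err, ")")
	}
}

func main() {
	profile := "kubenet-579150" // profile/context name from the log

	runProbe("k8s: nodes", "kubectl", "--context", profile, "get", "nodes", "-o", "wide")
	runProbe("netcat: /etc/resolv.conf", "kubectl", "--context", profile,
		"exec", "deploy/netcat", "--", "cat", "/etc/resolv.conf")
	runProbe("host: /etc/resolv.conf", "minikube", "-p", profile, "ssh", "cat /etc/resolv.conf")
}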

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-579150 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-579150

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-579150

>>> host: crictl pods:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: crictl containers:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> k8s: describe netcat deployment:
error: context "cilium-579150" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-579150" does not exist

>>> k8s: netcat logs:
error: context "cilium-579150" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-579150" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-579150" does not exist

>>> k8s: coredns logs:
error: context "cilium-579150" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-579150" does not exist

>>> k8s: api server logs:
error: context "cilium-579150" does not exist

>>> host: /etc/cni:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: ip a s:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: ip r s:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: iptables-save:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: iptables table nat:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-579150

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-579150

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-579150" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-579150" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-579150

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-579150

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-579150" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-579150" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-579150" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-579150" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-579150" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: kubelet daemon config:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> k8s: kubelet logs:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
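
The kubeconfig dumped above is empty (no clusters, contexts or users), which matches every kubectl-based collector in this dump failing with "context was not found": the cilium-579150 profile was never started, so nothing was ever written to the kubeconfig for it. A minimal way to confirm that state by hand, assuming a stock kubectl/minikube setup (illustrative commands, not taken from the harness):

    kubectl config get-contexts      # no entry for cilium-579150
    kubectl config current-context   # errors: current-context is not set
    minikube profile list            # cilium-579150 is not listed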

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-579150

>>> host: docker daemon status:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: docker daemon config:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: docker system info:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: cri-docker daemon status:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: cri-docker daemon config:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: cri-dockerd version:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: containerd daemon status:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: containerd daemon config:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: containerd config dump:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: crio daemon status:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: crio daemon config:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: /etc/crio:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

>>> host: crio config:
* Profile "cilium-579150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579150"

----------------------- debugLogs end: cilium-579150 [took: 3.548673855s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-579150" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-579150
--- SKIP: TestNetworkPlugins/group/cilium (3.72s)
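
Every section of the debugLogs dump above reports the same root cause: the cilium-579150 profile was never created, because this subtest is skipped on the job before "minikube start" runs, so the host-side collectors find no profile and the kubectl collectors find no context. For reference, the "To start a cluster" hint printed by each collector would expand to roughly the following on this job; the driver, runtime and CNI flags are assumptions inferred from the job name (KVM_Linux_crio), not values taken from this log:

    minikube start -p cilium-579150 --driver=kvm2 --container-runtime=crio --cni=cilium
    minikube profile list   # the profile would then be listed instead of "not found"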

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-900822" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-900822
E1210 06:55:14.899696   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:14.906046   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:14.917445   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:14.938857   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:14.958250   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:14.964650   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:14.976062   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/kindnet-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:14.980620   12588 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
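
The skip itself is expected on this job: start_stop_delete_test.go only exercises disable-driver-mounts with the virtualbox driver, so the only work done here is deleting the placeholder profile. The interleaved cert_rotation errors most likely come from the test process's cached Kubernetes clients still trying to reload client certificates for the flannel-579150 and kindnet-579150 profiles deleted earlier in the run; the referenced files are gone. A quick illustrative check using the path from the log (not part of the harness):

    ls /home/jenkins/minikube-integration/22089-8667/.minikube/profiles/flannel-579150/client.crt
    # expected: "No such file or directory", matching the errors above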