Test Report: KVM_Linux_containerd 20327

42aa66410b215ebc171a5bcfa49a23d455b53987:2025-01-27:38094

Failed tests (3/316)

Order  Failed test                                                     Duration (s)
358    TestStartStop/group/embed-certs/serial/SecondStart              1633.06
360    TestStartStop/group/no-preload/serial/SecondStart               1589.18
362    TestStartStop/group/default-k8s-diff-port/serial/SecondStart    1590.94
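
To re-run one of these failing subtests locally, a minimal sketch (assuming the standard minikube integration-test layout, where these tests live under test/integration, and a locally built out/minikube-linux-amd64; the CI harness passes additional flags to the test binary that are not shown here):

	# Select a single subtest by its full slash-separated name; go test's -run
	# flag matches subtest paths, and the long -timeout allows for the ~27 min
	# runs recorded above.
	go test ./test/integration/ -v -timeout 90m \
	  -run 'TestStartStop/group/embed-certs/serial/SecondStart'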
TestStartStop/group/embed-certs/serial/SecondStart (1633.06s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-635679 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 14:13:24.636322 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:34.232014 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-635679 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (27m11.119626185s)

-- stdout --
	* [embed-certs-635679] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-635679" primary control-plane node in "embed-certs-635679" cluster
	* Restarting existing kvm2 VM for "embed-certs-635679" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-635679 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0127 14:13:21.155797 1860210 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:13:21.155930 1860210 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:13:21.155943 1860210 out.go:358] Setting ErrFile to fd 2...
	I0127 14:13:21.155949 1860210 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:13:21.156129 1860210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 14:13:21.156671 1860210 out.go:352] Setting JSON to false
	I0127 14:13:21.157747 1860210 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":39342,"bootTime":1737947859,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:13:21.157863 1860210 start.go:139] virtualization: kvm guest
	I0127 14:13:21.160045 1860210 out.go:177] * [embed-certs-635679] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:13:21.161168 1860210 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:13:21.161170 1860210 notify.go:220] Checking for updates...
	I0127 14:13:21.163620 1860210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:13:21.164982 1860210 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:13:21.166215 1860210 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 14:13:21.167350 1860210 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:13:21.168478 1860210 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:13:21.169839 1860210 config.go:182] Loaded profile config "embed-certs-635679": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:13:21.170231 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:13:21.170290 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:13:21.185178 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0127 14:13:21.185570 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:13:21.186187 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:13:21.186208 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:13:21.186553 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:13:21.186758 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:13:21.187052 1860210 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:13:21.187370 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:13:21.187420 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:13:21.202267 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40947
	I0127 14:13:21.202785 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:13:21.203261 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:13:21.203283 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:13:21.203584 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:13:21.203776 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:13:21.239051 1860210 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:13:21.240262 1860210 start.go:297] selected driver: kvm2
	I0127 14:13:21.240276 1860210 start.go:901] validating driver "kvm2" against &{Name:embed-certs-635679 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:13:21.240388 1860210 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:13:21.241030 1860210 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:21.241112 1860210 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:13:21.256194 1860210 install.go:137] /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:13:21.256583 1860210 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:13:21.256621 1860210 cni.go:84] Creating CNI manager for ""
	I0127 14:13:21.256669 1860210 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:13:21.256708 1860210 start.go:340] cluster config:
	{Name:embed-certs-635679 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:13:21.256817 1860210 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:21.258831 1860210 out.go:177] * Starting "embed-certs-635679" primary control-plane node in "embed-certs-635679" cluster
	I0127 14:13:21.260025 1860210 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:13:21.260062 1860210 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 14:13:21.260069 1860210 cache.go:56] Caching tarball of preloaded images
	I0127 14:13:21.260176 1860210 preload.go:172] Found /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 14:13:21.260187 1860210 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 14:13:21.260319 1860210 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/config.json ...
	I0127 14:13:21.260495 1860210 start.go:360] acquireMachinesLock for embed-certs-635679: {Name:mk6fcac41a7a21b211b65e56994e625852d1a781 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:13:21.260541 1860210 start.go:364] duration metric: took 28.059µs to acquireMachinesLock for "embed-certs-635679"
	I0127 14:13:21.260559 1860210 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:13:21.260569 1860210 fix.go:54] fixHost starting: 
	I0127 14:13:21.260853 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:13:21.260892 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:13:21.274983 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I0127 14:13:21.275451 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:13:21.275954 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:13:21.275977 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:13:21.276307 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:13:21.276503 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:13:21.276660 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
	I0127 14:13:21.278297 1860210 fix.go:112] recreateIfNeeded on embed-certs-635679: state=Stopped err=<nil>
	I0127 14:13:21.278324 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	W0127 14:13:21.278486 1860210 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:13:21.280608 1860210 out.go:177] * Restarting existing kvm2 VM for "embed-certs-635679" ...
	I0127 14:13:21.282118 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Start
	I0127 14:13:21.282296 1860210 main.go:141] libmachine: (embed-certs-635679) starting domain...
	I0127 14:13:21.282314 1860210 main.go:141] libmachine: (embed-certs-635679) ensuring networks are active...
	I0127 14:13:21.283192 1860210 main.go:141] libmachine: (embed-certs-635679) Ensuring network default is active
	I0127 14:13:21.283525 1860210 main.go:141] libmachine: (embed-certs-635679) Ensuring network mk-embed-certs-635679 is active
	I0127 14:13:21.283901 1860210 main.go:141] libmachine: (embed-certs-635679) getting domain XML...
	I0127 14:13:21.284658 1860210 main.go:141] libmachine: (embed-certs-635679) creating domain...
	I0127 14:13:22.486225 1860210 main.go:141] libmachine: (embed-certs-635679) waiting for IP...
	I0127 14:13:22.487188 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:22.487655 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:22.487730 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:22.487644 1860245 retry.go:31] will retry after 224.272713ms: waiting for domain to come up
	I0127 14:13:22.713260 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:22.713864 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:22.713898 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:22.713801 1860245 retry.go:31] will retry after 258.194373ms: waiting for domain to come up
	I0127 14:13:22.973378 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:22.973976 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:22.974011 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:22.973915 1860245 retry.go:31] will retry after 393.696938ms: waiting for domain to come up
	I0127 14:13:23.369588 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:23.370128 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:23.370157 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:23.370080 1860245 retry.go:31] will retry after 521.788404ms: waiting for domain to come up
	I0127 14:13:23.893538 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:23.894120 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:23.894153 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:23.894072 1860245 retry.go:31] will retry after 746.089871ms: waiting for domain to come up
	I0127 14:13:24.641317 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:24.641869 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:24.641896 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:24.641827 1860245 retry.go:31] will retry after 894.333313ms: waiting for domain to come up
	I0127 14:13:25.537589 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:25.538102 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:25.538133 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:25.538046 1860245 retry.go:31] will retry after 974.563517ms: waiting for domain to come up
	I0127 14:13:26.514194 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:26.514729 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:26.514773 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:26.514693 1860245 retry.go:31] will retry after 1.359543608s: waiting for domain to come up
	I0127 14:13:27.876285 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:27.876898 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:27.876932 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:27.876828 1860245 retry.go:31] will retry after 1.168162945s: waiting for domain to come up
	I0127 14:13:29.047085 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:29.047663 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:29.047710 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:29.047643 1860245 retry.go:31] will retry after 2.191940383s: waiting for domain to come up
	I0127 14:13:31.240972 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:31.241466 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:31.241492 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:31.241437 1860245 retry.go:31] will retry after 1.80110911s: waiting for domain to come up
	I0127 14:13:33.044812 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:33.045257 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:33.045288 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:33.045243 1860245 retry.go:31] will retry after 2.233702385s: waiting for domain to come up
	I0127 14:13:35.281578 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:35.282187 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
	I0127 14:13:35.282213 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:35.282118 1860245 retry.go:31] will retry after 3.504793306s: waiting for domain to come up
	I0127 14:13:38.788161 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:38.788602 1860210 main.go:141] libmachine: (embed-certs-635679) found domain IP: 192.168.61.180
	I0127 14:13:38.788627 1860210 main.go:141] libmachine: (embed-certs-635679) reserving static IP address...
	I0127 14:13:38.788642 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has current primary IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:38.789050 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "embed-certs-635679", mac: "52:54:00:84:cf:47", ip: "192.168.61.180"} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:38.789105 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | skip adding static IP to network mk-embed-certs-635679 - found existing host DHCP lease matching {name: "embed-certs-635679", mac: "52:54:00:84:cf:47", ip: "192.168.61.180"}
	I0127 14:13:38.789129 1860210 main.go:141] libmachine: (embed-certs-635679) reserved static IP address 192.168.61.180 for domain embed-certs-635679
	I0127 14:13:38.789153 1860210 main.go:141] libmachine: (embed-certs-635679) waiting for SSH...
	I0127 14:13:38.789176 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Getting to WaitForSSH function...
	I0127 14:13:38.791170 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:38.791460 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:38.791483 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:38.791606 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Using SSH client type: external
	I0127 14:13:38.791654 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa (-rw-------)
	I0127 14:13:38.791695 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:13:38.791712 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | About to run SSH command:
	I0127 14:13:38.791725 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | exit 0
	I0127 14:13:38.915087 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | SSH cmd err, output: <nil>: 
	I0127 14:13:39.454657 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetConfigRaw
	I0127 14:13:39.455493 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetIP
	I0127 14:13:39.458697 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.459119 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:39.459163 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.459408 1860210 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/config.json ...
	I0127 14:13:39.459597 1860210 machine.go:93] provisionDockerMachine start ...
	I0127 14:13:39.459619 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:13:39.459816 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:13:39.463084 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.463500 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:39.463532 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.463700 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:13:39.463873 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:39.464041 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:39.464209 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:13:39.464372 1860210 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:39.464572 1860210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.180 22 <nil> <nil>}
	I0127 14:13:39.464583 1860210 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:13:39.574932 1860210 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 14:13:39.574977 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetMachineName
	I0127 14:13:39.575205 1860210 buildroot.go:166] provisioning hostname "embed-certs-635679"
	I0127 14:13:39.575229 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetMachineName
	I0127 14:13:39.575428 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:13:39.578257 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.578665 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:39.578689 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.578901 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:13:39.579108 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:39.579270 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:39.579419 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:13:39.579576 1860210 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:39.579818 1860210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.180 22 <nil> <nil>}
	I0127 14:13:39.579839 1860210 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-635679 && echo "embed-certs-635679" | sudo tee /etc/hostname
	I0127 14:13:39.700628 1860210 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-635679
	
	I0127 14:13:39.700666 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:13:39.703524 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.704220 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:39.704271 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.704474 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:13:39.704676 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:39.704810 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:39.704914 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:13:39.705085 1860210 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:39.705274 1860210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.180 22 <nil> <nil>}
	I0127 14:13:39.705297 1860210 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-635679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-635679/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-635679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:13:39.828188 1860210 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:13:39.828221 1860210 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-1798877/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-1798877/.minikube}
	I0127 14:13:39.828251 1860210 buildroot.go:174] setting up certificates
	I0127 14:13:39.828269 1860210 provision.go:84] configureAuth start
	I0127 14:13:39.828290 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetMachineName
	I0127 14:13:39.828584 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetIP
	I0127 14:13:39.831539 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.831969 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:39.831999 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.832067 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:13:39.834211 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.834550 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:39.834590 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.834734 1860210 provision.go:143] copyHostCerts
	I0127 14:13:39.834812 1860210 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem, removing ...
	I0127 14:13:39.834830 1860210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem
	I0127 14:13:39.834891 1860210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem (1078 bytes)
	I0127 14:13:39.835038 1860210 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem, removing ...
	I0127 14:13:39.835049 1860210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem
	I0127 14:13:39.835074 1860210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem (1123 bytes)
	I0127 14:13:39.835146 1860210 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem, removing ...
	I0127 14:13:39.835158 1860210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem
	I0127 14:13:39.835180 1860210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem (1675 bytes)
	I0127 14:13:39.835234 1860210 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem org=jenkins.embed-certs-635679 san=[127.0.0.1 192.168.61.180 embed-certs-635679 localhost minikube]
	I0127 14:13:39.923744 1860210 provision.go:177] copyRemoteCerts
	I0127 14:13:39.923816 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:13:39.923848 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:13:39.926360 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.926658 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:39.926687 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:39.926919 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:13:39.927098 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:39.927246 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:13:39.927368 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
	I0127 14:13:40.008198 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:13:40.030294 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 14:13:40.051055 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:13:40.072532 1860210 provision.go:87] duration metric: took 244.24352ms to configureAuth
	I0127 14:13:40.072578 1860210 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:13:40.072788 1860210 config.go:182] Loaded profile config "embed-certs-635679": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:13:40.072804 1860210 machine.go:96] duration metric: took 613.194376ms to provisionDockerMachine
	I0127 14:13:40.072813 1860210 start.go:293] postStartSetup for "embed-certs-635679" (driver="kvm2")
	I0127 14:13:40.072825 1860210 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:13:40.072852 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:13:40.073149 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:13:40.073178 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:13:40.075877 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.076210 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:40.076301 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.076446 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:13:40.076649 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:40.076842 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:13:40.076978 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
	I0127 14:13:40.156185 1860210 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:13:40.160264 1860210 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:13:40.160295 1860210 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/addons for local assets ...
	I0127 14:13:40.160368 1860210 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/files for local assets ...
	I0127 14:13:40.160463 1860210 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem -> 18060702.pem in /etc/ssl/certs
	I0127 14:13:40.160580 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:13:40.168956 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:13:40.190965 1860210 start.go:296] duration metric: took 118.133051ms for postStartSetup
	I0127 14:13:40.191014 1860210 fix.go:56] duration metric: took 18.93044406s for fixHost
	I0127 14:13:40.191043 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:13:40.193676 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.194047 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:40.194077 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.194205 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:13:40.194406 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:40.194535 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:40.194667 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:13:40.194824 1860210 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:40.195027 1860210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.180 22 <nil> <nil>}
	I0127 14:13:40.195040 1860210 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:13:40.299552 1860210 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987220.275198748
	
	I0127 14:13:40.299576 1860210 fix.go:216] guest clock: 1737987220.275198748
	I0127 14:13:40.299583 1860210 fix.go:229] Guest: 2025-01-27 14:13:40.275198748 +0000 UTC Remote: 2025-01-27 14:13:40.191018899 +0000 UTC m=+19.075426547 (delta=84.179849ms)
	I0127 14:13:40.299608 1860210 fix.go:200] guest clock delta is within tolerance: 84.179849ms
	I0127 14:13:40.299615 1860210 start.go:83] releasing machines lock for "embed-certs-635679", held for 19.039062058s
	I0127 14:13:40.299676 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:13:40.299993 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetIP
	I0127 14:13:40.302964 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.303339 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:40.303373 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.303518 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:13:40.304033 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:13:40.304226 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:13:40.304347 1860210 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:13:40.304392 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:13:40.304399 1860210 ssh_runner.go:195] Run: cat /version.json
	I0127 14:13:40.304437 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:13:40.307285 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.307612 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.307688 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:40.307709 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.307894 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:13:40.308042 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:40.308069 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:40.308109 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:40.308314 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:13:40.308322 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:13:40.308479 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:13:40.308670 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:13:40.308719 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
	I0127 14:13:40.308823 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
	I0127 14:13:40.416920 1860210 ssh_runner.go:195] Run: systemctl --version
	I0127 14:13:40.422621 1860210 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:13:40.427810 1860210 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:13:40.427863 1860210 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:13:40.442459 1860210 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:13:40.442486 1860210 start.go:495] detecting cgroup driver to use...
	I0127 14:13:40.442564 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 14:13:40.472735 1860210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 14:13:40.487526 1860210 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:13:40.487581 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:13:40.500662 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:13:40.514200 1860210 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:13:40.637821 1860210 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:13:40.782905 1860210 docker.go:233] disabling docker service ...
	I0127 14:13:40.782978 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:13:40.796697 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:13:40.808719 1860210 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:13:40.941152 1860210 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:13:41.056187 1860210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:13:41.069051 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:13:41.085641 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 14:13:41.094778 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 14:13:41.105068 1860210 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 14:13:41.105126 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 14:13:41.118970 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:13:41.129142 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 14:13:41.139297 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:13:41.148963 1860210 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:13:41.159097 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 14:13:41.168571 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 14:13:41.178272 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 14:13:41.187611 1860210 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:13:41.196779 1860210 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:13:41.196835 1860210 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:13:41.209411 1860210 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:13:41.217986 1860210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:13:41.331662 1860210 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 14:13:41.359894 1860210 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 14:13:41.359985 1860210 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:13:41.363948 1860210 retry.go:31] will retry after 579.51809ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 14:13:41.943710 1860210 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:13:41.948775 1860210 start.go:563] Will wait 60s for crictl version
	I0127 14:13:41.948834 1860210 ssh_runner.go:195] Run: which crictl
	I0127 14:13:41.952580 1860210 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:13:41.989078 1860210 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 14:13:41.989184 1860210 ssh_runner.go:195] Run: containerd --version
	I0127 14:13:42.014553 1860210 ssh_runner.go:195] Run: containerd --version
	I0127 14:13:42.039584 1860210 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 14:13:42.040834 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetIP
	I0127 14:13:42.044160 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:42.044561 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:13:42.044593 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:13:42.044836 1860210 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 14:13:42.048820 1860210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:13:42.061014 1860210 kubeadm.go:883] updating cluster {Name:embed-certs-635679 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:13:42.061136 1860210 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:13:42.061189 1860210 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:13:42.104465 1860210 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:13:42.104489 1860210 containerd.go:534] Images already preloaded, skipping extraction
	I0127 14:13:42.104539 1860210 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:13:42.140076 1860210 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:13:42.140103 1860210 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:13:42.140117 1860210 kubeadm.go:934] updating node { 192.168.61.180 8443 v1.32.1 containerd true true} ...
	I0127 14:13:42.140295 1860210 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-635679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:13:42.140367 1860210 ssh_runner.go:195] Run: sudo crictl info
	I0127 14:13:42.173422 1860210 cni.go:84] Creating CNI manager for ""
	I0127 14:13:42.173454 1860210 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:13:42.173470 1860210 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:13:42.173502 1860210 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.180 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-635679 NodeName:embed-certs-635679 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:13:42.173687 1860210 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-635679"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.180"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.180"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:13:42.173767 1860210 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:13:42.184900 1860210 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:13:42.184991 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:13:42.194622 1860210 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0127 14:13:42.210525 1860210 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:13:42.226019 1860210 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2314 bytes)
	I0127 14:13:42.241933 1860210 ssh_runner.go:195] Run: grep 192.168.61.180	control-plane.minikube.internal$ /etc/hosts
	I0127 14:13:42.245391 1860210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:13:42.256498 1860210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:13:42.375107 1860210 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:13:42.397661 1860210 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679 for IP: 192.168.61.180
	I0127 14:13:42.397701 1860210 certs.go:194] generating shared ca certs ...
	I0127 14:13:42.397747 1860210 certs.go:226] acquiring lock for ca certs: {Name:mkc6b95fb3d2c0d0c7049cde446028a0d731f231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:13:42.397956 1860210 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key
	I0127 14:13:42.398069 1860210 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key
	I0127 14:13:42.398092 1860210 certs.go:256] generating profile certs ...
	I0127 14:13:42.398253 1860210 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/client.key
	I0127 14:13:42.398340 1860210 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/apiserver.key.c3222ec9
	I0127 14:13:42.398404 1860210 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/proxy-client.key
	I0127 14:13:42.398585 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem (1338 bytes)
	W0127 14:13:42.398626 1860210 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070_empty.pem, impossibly tiny 0 bytes
	I0127 14:13:42.398640 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:13:42.398671 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:13:42.398704 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:13:42.398735 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem (1675 bytes)
	I0127 14:13:42.398828 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:13:42.399837 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:13:42.433852 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:13:42.458311 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:13:42.481339 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 14:13:42.508328 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 14:13:42.540137 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 14:13:42.568660 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:13:42.591132 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:13:42.616298 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /usr/share/ca-certificates/18060702.pem (1708 bytes)
	I0127 14:13:42.641456 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:13:42.667039 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem --> /usr/share/ca-certificates/1806070.pem (1338 bytes)
	I0127 14:13:42.690033 1860210 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:13:42.707437 1860210 ssh_runner.go:195] Run: openssl version
	I0127 14:13:42.713417 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18060702.pem && ln -fs /usr/share/ca-certificates/18060702.pem /etc/ssl/certs/18060702.pem"
	I0127 14:13:42.724271 1860210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18060702.pem
	I0127 14:13:42.728246 1860210 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:10 /usr/share/ca-certificates/18060702.pem
	I0127 14:13:42.728300 1860210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18060702.pem
	I0127 14:13:42.734063 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18060702.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:13:42.744802 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:13:42.755448 1860210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:13:42.761015 1860210 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:13:42.761067 1860210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:13:42.768368 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:13:42.778726 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806070.pem && ln -fs /usr/share/ca-certificates/1806070.pem /etc/ssl/certs/1806070.pem"
	I0127 14:13:42.788563 1860210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806070.pem
	I0127 14:13:42.792702 1860210 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:10 /usr/share/ca-certificates/1806070.pem
	I0127 14:13:42.792758 1860210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806070.pem
	I0127 14:13:42.798170 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806070.pem /etc/ssl/certs/51391683.0"
	I0127 14:13:42.807686 1860210 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:13:42.811838 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:13:42.817410 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:13:42.822851 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:13:42.828321 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:13:42.833665 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:13:42.839354 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
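Each of the openssl x509 -noout -in <cert> -checkend 86400 runs above asks whether the named certificate expires within the next 24 hours; a failing check is what would prompt regeneration. An equivalent check written as a Go sketch (the certificate path is taken from the command line rather than hard-coded):

	// sketch: Go equivalent of `openssl x509 -checkend 86400` on a PEM certificate.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		if len(os.Args) < 2 {
			log.Fatal("usage: checkend <cert.pem>")
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400 seconds")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24 hours")
	}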
	I0127 14:13:42.844998 1860210 kubeadm.go:392] StartCluster: {Name:embed-certs-635679 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:13:42.845087 1860210 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 14:13:42.845151 1860210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:13:42.884238 1860210 cri.go:89] found id: "32f7119679a11962824e58e5b6f1deebde8552cd40a9aef87743a355e2c311e3"
	I0127 14:13:42.884264 1860210 cri.go:89] found id: "4edb9bf088a7d0b36405d0de8e3c5989acb34e7042d9021c696e3321f488b9a3"
	I0127 14:13:42.884269 1860210 cri.go:89] found id: "2af6cfde618af2b9e79131c949655785d535f681dd62ce0209b62e62574e16b0"
	I0127 14:13:42.884272 1860210 cri.go:89] found id: "c0e7fafaca98c2b01e296a632a5b08e714c3f8b71473add8914de864dc58a256"
	I0127 14:13:42.884275 1860210 cri.go:89] found id: "57bb5c43279ffb0ccc598e65301ecf90861c8a88230919c92f86bbc8b9990027"
	I0127 14:13:42.884279 1860210 cri.go:89] found id: "fb1d1cc0a1ab37ae8dfdef1aaec53e04c544a0bdff9efcaf81b043bad63cac34"
	I0127 14:13:42.884283 1860210 cri.go:89] found id: "1536cb7c9e5e69c77ec5eaffae1da0a1a546fbef499fa7e963764811204997d3"
	I0127 14:13:42.884287 1860210 cri.go:89] found id: ""
	I0127 14:13:42.884361 1860210 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 14:13:42.899419 1860210 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T14:13:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 14:13:42.899510 1860210 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:13:42.910122 1860210 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 14:13:42.910145 1860210 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 14:13:42.910195 1860210 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 14:13:42.919020 1860210 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:13:42.919798 1860210 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-635679" does not appear in /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:13:42.920141 1860210 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-1798877/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-635679" cluster setting kubeconfig missing "embed-certs-635679" context setting]
	I0127 14:13:42.920780 1860210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:13:42.922301 1860210 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 14:13:42.931572 1860210 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.180
	I0127 14:13:42.931609 1860210 kubeadm.go:1160] stopping kube-system containers ...
	I0127 14:13:42.931623 1860210 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 14:13:42.931679 1860210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:13:42.973261 1860210 cri.go:89] found id: "32f7119679a11962824e58e5b6f1deebde8552cd40a9aef87743a355e2c311e3"
	I0127 14:13:42.973291 1860210 cri.go:89] found id: "4edb9bf088a7d0b36405d0de8e3c5989acb34e7042d9021c696e3321f488b9a3"
	I0127 14:13:42.973298 1860210 cri.go:89] found id: "2af6cfde618af2b9e79131c949655785d535f681dd62ce0209b62e62574e16b0"
	I0127 14:13:42.973304 1860210 cri.go:89] found id: "c0e7fafaca98c2b01e296a632a5b08e714c3f8b71473add8914de864dc58a256"
	I0127 14:13:42.973308 1860210 cri.go:89] found id: "57bb5c43279ffb0ccc598e65301ecf90861c8a88230919c92f86bbc8b9990027"
	I0127 14:13:42.973313 1860210 cri.go:89] found id: "fb1d1cc0a1ab37ae8dfdef1aaec53e04c544a0bdff9efcaf81b043bad63cac34"
	I0127 14:13:42.973317 1860210 cri.go:89] found id: "1536cb7c9e5e69c77ec5eaffae1da0a1a546fbef499fa7e963764811204997d3"
	I0127 14:13:42.973321 1860210 cri.go:89] found id: ""
	I0127 14:13:42.973327 1860210 cri.go:252] Stopping containers: [32f7119679a11962824e58e5b6f1deebde8552cd40a9aef87743a355e2c311e3 4edb9bf088a7d0b36405d0de8e3c5989acb34e7042d9021c696e3321f488b9a3 2af6cfde618af2b9e79131c949655785d535f681dd62ce0209b62e62574e16b0 c0e7fafaca98c2b01e296a632a5b08e714c3f8b71473add8914de864dc58a256 57bb5c43279ffb0ccc598e65301ecf90861c8a88230919c92f86bbc8b9990027 fb1d1cc0a1ab37ae8dfdef1aaec53e04c544a0bdff9efcaf81b043bad63cac34 1536cb7c9e5e69c77ec5eaffae1da0a1a546fbef499fa7e963764811204997d3]
	I0127 14:13:42.973384 1860210 ssh_runner.go:195] Run: which crictl
	I0127 14:13:42.977408 1860210 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 32f7119679a11962824e58e5b6f1deebde8552cd40a9aef87743a355e2c311e3 4edb9bf088a7d0b36405d0de8e3c5989acb34e7042d9021c696e3321f488b9a3 2af6cfde618af2b9e79131c949655785d535f681dd62ce0209b62e62574e16b0 c0e7fafaca98c2b01e296a632a5b08e714c3f8b71473add8914de864dc58a256 57bb5c43279ffb0ccc598e65301ecf90861c8a88230919c92f86bbc8b9990027 fb1d1cc0a1ab37ae8dfdef1aaec53e04c544a0bdff9efcaf81b043bad63cac34 1536cb7c9e5e69c77ec5eaffae1da0a1a546fbef499fa7e963764811204997d3
	I0127 14:13:43.019472 1860210 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 14:13:43.035447 1860210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:13:43.044399 1860210 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:13:43.044428 1860210 kubeadm.go:157] found existing configuration files:
	
	I0127 14:13:43.044484 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:13:43.052786 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:13:43.052850 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:13:43.061481 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:13:43.070018 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:13:43.070077 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:13:43.079149 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:13:43.087642 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:13:43.087691 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:13:43.096876 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:13:43.105836 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:13:43.105898 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:13:43.114179 1860210 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:13:43.123378 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:43.253719 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:44.859245 1860210 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.605475862s)
	I0127 14:13:44.859296 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:45.067517 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:45.156357 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:45.250349 1860210 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:13:45.250445 1860210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:45.751433 1860210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:46.250930 1860210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:46.269503 1860210 api_server.go:72] duration metric: took 1.019153447s to wait for apiserver process to appear ...
	I0127 14:13:46.269536 1860210 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:13:46.269562 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
	I0127 14:13:46.270172 1860210 api_server.go:269] stopped: https://192.168.61.180:8443/healthz: Get "https://192.168.61.180:8443/healthz": dial tcp 192.168.61.180:8443: connect: connection refused
	I0127 14:13:46.770303 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
	I0127 14:13:48.602517 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:13:48.602550 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:13:48.602568 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
	I0127 14:13:48.630699 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:13:48.630753 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:13:48.770158 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
	I0127 14:13:48.776132 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:13:48.776165 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:13:49.269810 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
	I0127 14:13:49.283288 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:13:49.283333 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:13:49.770613 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
	I0127 14:13:49.781512 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:13:49.781556 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:13:50.270274 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
	I0127 14:13:50.276610 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 200:
	ok
	I0127 14:13:50.285654 1860210 api_server.go:141] control plane version: v1.32.1
	I0127 14:13:50.285703 1860210 api_server.go:131] duration metric: took 4.01615716s to wait for apiserver health ...
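The /healthz poll above moves through the states typical of an apiserver restart: connection refused while the process starts, 403 for the anonymous probe until the rbac/bootstrap-roles post-start hook completes, 500 while the remaining hooks finish, then 200. A minimal Go sketch of such a poll against the address in the log (TLS verification is skipped here purely to keep the sketch self-contained; a real client should verify the cluster CA instead):

	// sketch: poll an apiserver /healthz endpoint until it returns 200 or the deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "https://192.168.61.180:8443/healthz" // address from the log
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute) // assumption: poll budget
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver is healthy")
					return
				}
				fmt.Println("healthz returned", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for apiserver healthz")
	}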
	I0127 14:13:50.285716 1860210 cni.go:84] Creating CNI manager for ""
	I0127 14:13:50.285725 1860210 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:13:50.287872 1860210 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:13:50.289432 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:13:50.300066 1860210 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:13:50.328085 1860210 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:13:50.341514 1860210 system_pods.go:59] 8 kube-system pods found
	I0127 14:13:50.341585 1860210 system_pods.go:61] "coredns-668d6bf9bc-xx6ks" [ae9e15c0-59d8-4285-b8bb-94b70a9ebc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:13:50.341603 1860210 system_pods.go:61] "etcd-embed-certs-635679" [927e5a6c-7d19-4555-86eb-d567f3ce4a8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:13:50.341617 1860210 system_pods.go:61] "kube-apiserver-embed-certs-635679" [4ca30362-b3d5-47ce-ae6e-6c0c5d8b29e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:13:50.341634 1860210 system_pods.go:61] "kube-controller-manager-embed-certs-635679" [af0fa1a5-481a-44d4-9965-f49aeb50d944] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:13:50.341644 1860210 system_pods.go:61] "kube-proxy-8cwvc" [66c2e806-d895-43bd-aecf-89e00bc47f2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 14:13:50.341663 1860210 system_pods.go:61] "kube-scheduler-embed-certs-635679" [a3338c56-f565-4a80-84a5-c776e5b932fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:13:50.341673 1860210 system_pods.go:61] "metrics-server-f79f97bbb-mt5gf" [682d32cc-fec1-4a59-b209-e0430fdb9aba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:13:50.341701 1860210 system_pods.go:61] "storage-provisioner" [f1cbcd32-4a98-4100-a973-f4c0e241a76e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 14:13:50.341721 1860210 system_pods.go:74] duration metric: took 13.601769ms to wait for pod list to return data ...
	I0127 14:13:50.341734 1860210 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:13:50.351141 1860210 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:13:50.351180 1860210 node_conditions.go:123] node cpu capacity is 2
	I0127 14:13:50.351196 1860210 node_conditions.go:105] duration metric: took 9.451637ms to run NodePressure ...
	I0127 14:13:50.351221 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:50.638063 1860210 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 14:13:50.644591 1860210 kubeadm.go:739] kubelet initialised
	I0127 14:13:50.644623 1860210 kubeadm.go:740] duration metric: took 6.526455ms waiting for restarted kubelet to initialise ...
	I0127 14:13:50.644635 1860210 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
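The per-pod waits that follow test each system pod's Ready condition through the Kubernetes API. A hedged client-go sketch of that condition check, assuming a placeholder kubeconfig path and reusing the coredns pod name from the log:

	// sketch: read a pod and report whether its Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// assumption: kubeconfig path is a placeholder, not the test's real path
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-xx6ks", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("Ready:", podReady(pod))
	}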
	I0127 14:13:50.649514 1860210 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace to be "Ready" ...
	I0127 14:13:52.657424 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:55.156449 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:55.657432 1860210 pod_ready.go:93] pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace has status "Ready":"True"
	I0127 14:13:55.657455 1860210 pod_ready.go:82] duration metric: took 5.007903814s for pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace to be "Ready" ...
	I0127 14:13:55.657465 1860210 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:13:57.663788 1860210 pod_ready.go:93] pod "etcd-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
	I0127 14:13:57.663817 1860210 pod_ready.go:82] duration metric: took 2.006346137s for pod "etcd-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:13:57.663832 1860210 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:13:59.671160 1860210 pod_ready.go:103] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:02.170505 1860210 pod_ready.go:103] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:04.171363 1860210 pod_ready.go:103] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:06.171320 1860210 pod_ready.go:93] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:06.171344 1860210 pod_ready.go:82] duration metric: took 8.507503047s for pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:06.171355 1860210 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:06.177197 1860210 pod_ready.go:93] pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:06.177216 1860210 pod_ready.go:82] duration metric: took 5.855315ms for pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:06.177225 1860210 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8cwvc" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:06.181880 1860210 pod_ready.go:93] pod "kube-proxy-8cwvc" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:06.181903 1860210 pod_ready.go:82] duration metric: took 4.66997ms for pod "kube-proxy-8cwvc" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:06.181914 1860210 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:06.186791 1860210 pod_ready.go:93] pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:06.186811 1860210 pod_ready.go:82] duration metric: took 4.890146ms for pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:06.186823 1860210 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:08.195701 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:10.694821 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:13.193623 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:15.693213 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:18.192785 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:20.194464 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:22.693253 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:24.694821 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:26.697694 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:29.193107 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:31.194686 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:33.692966 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:36.192603 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:38.192896 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:40.193587 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:42.195047 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:44.195462 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:46.698937 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:49.193562 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:51.194152 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:53.692637 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:55.693047 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:57.693793 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:00.193264 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:02.193852 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:04.195528 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:06.693845 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:08.968945 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:11.194005 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:13.693461 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:15.693828 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:17.694630 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:20.193765 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:22.194235 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:24.694860 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:26.694919 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:29.195101 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:31.694467 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:34.194385 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:36.693914 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:38.696081 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:41.197272 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:43.695209 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:46.195123 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:48.693952 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:50.694026 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:53.206371 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:55.695985 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:58.195326 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:00.693783 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:03.193681 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:05.693111 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:07.693549 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:10.193399 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:12.193548 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:14.694290 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:17.193461 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:19.693503 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:21.693867 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:23.693979 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:26.193147 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:28.194207 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:30.194278 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:32.195140 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:34.195180 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:36.694560 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:39.193069 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:41.193160 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:43.193961 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:45.194883 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:47.693751 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:50.193423 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:52.693522 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:55.194191 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:57.194327 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:59.692261 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:01.693543 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:04.193846 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:06.194219 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:08.692728 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:10.693640 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:12.694404 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:15.193536 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:17.692747 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:19.693588 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:22.194367 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:24.194842 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:26.693271 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:29.198088 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:31.693543 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:34.195013 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:36.693172 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:38.694082 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:41.192529 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:43.194555 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:45.692542 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:47.695942 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:50.194474 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:52.696314 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:55.193437 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:57.693084 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:00.192946 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:02.193687 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:04.692411 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:06.187223 1860210 pod_ready.go:82] duration metric: took 4m0.000379978s for pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace to be "Ready" ...
	E0127 14:18:06.187264 1860210 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 14:18:06.187307 1860210 pod_ready.go:39] duration metric: took 4m15.542651284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:18:06.187351 1860210 kubeadm.go:597] duration metric: took 4m23.277196896s to restartPrimaryControlPlane
	W0127 14:18:06.187434 1860210 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 14:18:06.187467 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 14:18:07.911632 1860210 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.724132799s)
	I0127 14:18:07.911722 1860210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:18:07.931280 1860210 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:18:07.944298 1860210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:18:07.954011 1860210 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:18:07.954034 1860210 kubeadm.go:157] found existing configuration files:
	
	I0127 14:18:07.954077 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:18:07.963218 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:18:07.963275 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:18:07.973745 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:18:07.982893 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:18:07.982960 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:18:07.992093 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:18:08.001260 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:18:08.001322 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:18:08.012990 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:18:08.021707 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:18:08.021763 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:18:08.031820 1860210 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:18:08.073451 1860210 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:18:08.073535 1860210 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:18:08.185904 1860210 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:18:08.186103 1860210 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:18:08.186246 1860210 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:18:08.192454 1860210 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:18:08.194520 1860210 out.go:235]   - Generating certificates and keys ...
	I0127 14:18:08.194603 1860210 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:18:08.194694 1860210 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:18:08.194839 1860210 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:18:08.194927 1860210 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:18:08.195012 1860210 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:18:08.195078 1860210 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:18:08.195179 1860210 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:18:08.195283 1860210 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:18:08.195394 1860210 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:18:08.196373 1860210 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:18:08.196466 1860210 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:18:08.196542 1860210 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:18:08.321098 1860210 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:18:08.541093 1860210 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:18:08.651159 1860210 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:18:08.826558 1860210 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:18:08.988229 1860210 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:18:08.988652 1860210 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:18:08.991442 1860210 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:18:08.993001 1860210 out.go:235]   - Booting up control plane ...
	I0127 14:18:08.993138 1860210 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:18:08.993209 1860210 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:18:08.994107 1860210 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:18:09.014865 1860210 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:18:09.020651 1860210 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:18:09.020750 1860210 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:18:09.151753 1860210 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:18:09.151884 1860210 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:18:09.653270 1860210 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.611822ms
	I0127 14:18:09.653382 1860210 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:18:17.655587 1860210 kubeadm.go:310] [api-check] The API server is healthy after 8.002072671s
	I0127 14:18:17.668708 1860210 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:18:17.682413 1860210 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:18:17.704713 1860210 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:18:17.704968 1860210 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-635679 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:18:17.713080 1860210 kubeadm.go:310] [bootstrap-token] Using token: hphos4.59px2lq9c4g168m4
	I0127 14:18:17.714344 1860210 out.go:235]   - Configuring RBAC rules ...
	I0127 14:18:17.714512 1860210 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:18:17.721371 1860210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:18:17.727820 1860210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:18:17.731000 1860210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:18:17.733786 1860210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:18:17.736631 1860210 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:18:18.062788 1860210 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:18:18.485209 1860210 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:18:19.063817 1860210 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:18:19.065385 1860210 kubeadm.go:310] 
	I0127 14:18:19.065503 1860210 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:18:19.065516 1860210 kubeadm.go:310] 
	I0127 14:18:19.065665 1860210 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:18:19.065689 1860210 kubeadm.go:310] 
	I0127 14:18:19.065721 1860210 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:18:19.065806 1860210 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:18:19.065900 1860210 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:18:19.065916 1860210 kubeadm.go:310] 
	I0127 14:18:19.065998 1860210 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:18:19.066007 1860210 kubeadm.go:310] 
	I0127 14:18:19.066075 1860210 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:18:19.066089 1860210 kubeadm.go:310] 
	I0127 14:18:19.066154 1860210 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:18:19.066260 1860210 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:18:19.066381 1860210 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:18:19.066401 1860210 kubeadm.go:310] 
	I0127 14:18:19.066518 1860210 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:18:19.066627 1860210 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:18:19.066638 1860210 kubeadm.go:310] 
	I0127 14:18:19.066782 1860210 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hphos4.59px2lq9c4g168m4 \
	I0127 14:18:19.066929 1860210 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e \
	I0127 14:18:19.066973 1860210 kubeadm.go:310] 	--control-plane 
	I0127 14:18:19.066984 1860210 kubeadm.go:310] 
	I0127 14:18:19.067112 1860210 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:18:19.067124 1860210 kubeadm.go:310] 
	I0127 14:18:19.067244 1860210 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hphos4.59px2lq9c4g168m4 \
	I0127 14:18:19.067390 1860210 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e 
	I0127 14:18:19.067997 1860210 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:18:19.068048 1860210 cni.go:84] Creating CNI manager for ""
	I0127 14:18:19.068068 1860210 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:18:19.070005 1860210 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:18:19.071444 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:18:19.083641 1860210 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:18:19.106274 1860210 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:18:19.106345 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:19.106355 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-635679 minikube.k8s.io/updated_at=2025_01_27T14_18_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=embed-certs-635679 minikube.k8s.io/primary=true
	I0127 14:18:19.138908 1860210 ops.go:34] apiserver oom_adj: -16
	I0127 14:18:19.335635 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:19.836673 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:20.336363 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:20.836633 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:21.336621 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:21.835710 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:22.336249 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:22.835985 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:23.335802 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:23.461630 1860210 kubeadm.go:1113] duration metric: took 4.355337127s to wait for elevateKubeSystemPrivileges
	I0127 14:18:23.461686 1860210 kubeadm.go:394] duration metric: took 4m40.616696193s to StartCluster
	I0127 14:18:23.461716 1860210 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:18:23.461811 1860210 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:18:23.463618 1860210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:18:23.464255 1860210 config.go:182] Loaded profile config "embed-certs-635679": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:18:23.464387 1860210 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:18:23.464492 1860210 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-635679"
	I0127 14:18:23.464512 1860210 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-635679"
	W0127 14:18:23.464525 1860210 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:18:23.464561 1860210 host.go:66] Checking if "embed-certs-635679" exists ...
	I0127 14:18:23.464992 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:23.465036 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:23.465118 1860210 addons.go:69] Setting default-storageclass=true in profile "embed-certs-635679"
	I0127 14:18:23.465161 1860210 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-635679"
	I0127 14:18:23.465260 1860210 addons.go:69] Setting dashboard=true in profile "embed-certs-635679"
	I0127 14:18:23.465281 1860210 addons.go:238] Setting addon dashboard=true in "embed-certs-635679"
	W0127 14:18:23.465290 1860210 addons.go:247] addon dashboard should already be in state true
	I0127 14:18:23.465318 1860210 host.go:66] Checking if "embed-certs-635679" exists ...
	I0127 14:18:23.465505 1860210 addons.go:69] Setting metrics-server=true in profile "embed-certs-635679"
	I0127 14:18:23.465529 1860210 addons.go:238] Setting addon metrics-server=true in "embed-certs-635679"
	W0127 14:18:23.465537 1860210 addons.go:247] addon metrics-server should already be in state true
	I0127 14:18:23.465577 1860210 host.go:66] Checking if "embed-certs-635679" exists ...
	I0127 14:18:23.465620 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:23.465655 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:23.465703 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:23.465737 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:23.468726 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:23.468782 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:23.464353 1860210 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 14:18:23.472272 1860210 out.go:177] * Verifying Kubernetes components...
	I0127 14:18:23.473717 1860210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:18:23.486905 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0127 14:18:23.487573 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:23.488533 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:18:23.488564 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:23.489646 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:23.492416 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:23.492479 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:23.494948 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I0127 14:18:23.498090 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:23.499354 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:18:23.499372 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:23.499777 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:23.501693 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
	I0127 14:18:23.507108 1860210 addons.go:238] Setting addon default-storageclass=true in "embed-certs-635679"
	W0127 14:18:23.507133 1860210 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:18:23.507169 1860210 host.go:66] Checking if "embed-certs-635679" exists ...
	I0127 14:18:23.507561 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:23.507596 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:23.507842 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I0127 14:18:23.508334 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:23.508998 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:18:23.509028 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:23.509419 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:23.509702 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34095
	I0127 14:18:23.510237 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:23.510277 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:23.510654 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I0127 14:18:23.510992 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:23.511486 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:23.511540 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:18:23.511559 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:23.511969 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:23.512065 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:18:23.512083 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:23.512416 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:23.512492 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
	I0127 14:18:23.513009 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:23.513061 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:23.515440 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:18:23.517429 1860210 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:18:23.518694 1860210 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:18:23.518719 1860210 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:18:23.518762 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:18:23.522925 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:18:23.523432 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:18:23.523476 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:18:23.523800 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:18:23.524027 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:18:23.524224 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:18:23.524363 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
	I0127 14:18:23.527536 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
	I0127 14:18:23.528108 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:23.528643 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:18:23.528663 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:23.529143 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:23.529762 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:23.529807 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:23.538135 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35569
	I0127 14:18:23.538598 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:23.539117 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:18:23.539136 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:23.539501 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:23.539694 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
	I0127 14:18:23.541494 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:18:23.543428 1860210 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:18:23.544588 1860210 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:18:23.544972 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36059
	I0127 14:18:23.545476 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:23.545705 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:18:23.545726 1860210 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:18:23.545748 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:18:23.546012 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:18:23.546035 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:23.546451 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:23.546625 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
	I0127 14:18:23.548509 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:18:23.549640 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:18:23.550013 1860210 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:18:23.550215 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:18:23.550237 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:18:23.550507 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:18:23.550727 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:18:23.550982 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:18:23.551131 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
	I0127 14:18:23.551683 1860210 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:18:23.551699 1860210 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:18:23.551714 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:18:23.554782 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37165
	I0127 14:18:23.555098 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:18:23.555841 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:18:23.555993 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:18:23.555996 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:18:23.556008 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:18:23.556074 1860210 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:23.556171 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:18:23.556323 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
	I0127 14:18:23.556582 1860210 main.go:141] libmachine: Using API Version  1
	I0127 14:18:23.556602 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:23.557022 1860210 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:23.557197 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
	I0127 14:18:23.558779 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
	I0127 14:18:23.559006 1860210 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:18:23.559020 1860210 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:18:23.559039 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
	I0127 14:18:23.562487 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:18:23.562891 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
	I0127 14:18:23.562925 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
	I0127 14:18:23.563172 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
	I0127 14:18:23.563357 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
	I0127 14:18:23.563516 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
	I0127 14:18:23.563641 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
	I0127 14:18:23.757691 1860210 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:18:23.782030 1860210 node_ready.go:35] waiting up to 6m0s for node "embed-certs-635679" to be "Ready" ...
	I0127 14:18:23.817711 1860210 node_ready.go:49] node "embed-certs-635679" has status "Ready":"True"
	I0127 14:18:23.817741 1860210 node_ready.go:38] duration metric: took 35.669892ms for node "embed-certs-635679" to be "Ready" ...
	I0127 14:18:23.817752 1860210 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:18:23.859312 1860210 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-52k8k" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:23.889570 1860210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:18:23.894297 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:18:23.894322 1860210 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:18:23.961705 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:18:23.961741 1860210 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:18:23.980733 1860210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:18:24.000036 1860210 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:18:24.000069 1860210 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:18:24.014883 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:18:24.014916 1860210 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:18:24.046102 1860210 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:18:24.046137 1860210 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:18:24.084833 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:18:24.084873 1860210 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:18:24.149628 1860210 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:24.149663 1860210 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:18:24.254695 1860210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:24.289523 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:18:24.289558 1860210 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:18:24.398702 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:18:24.398835 1860210 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:18:24.441678 1860210 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:24.441738 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
	I0127 14:18:24.442877 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
	I0127 14:18:24.442908 1860210 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:24.442961 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:24.442981 1860210 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:24.443016 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
	I0127 14:18:24.443437 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
	I0127 14:18:24.443453 1860210 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:24.443509 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:24.467985 1860210 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:24.468017 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
	I0127 14:18:24.468404 1860210 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:24.468469 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:24.520080 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:18:24.520127 1860210 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:18:24.566543 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:18:24.566583 1860210 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:18:24.694053 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:24.694088 1860210 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:18:24.797378 1860210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:25.171642 1860210 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.19083972s)
	I0127 14:18:25.171700 1860210 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:25.171712 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
	I0127 14:18:25.172020 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
	I0127 14:18:25.173376 1860210 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:25.173397 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:25.173415 1860210 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:25.173425 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
	I0127 14:18:25.173721 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
	I0127 14:18:25.173726 1860210 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:25.173783 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:25.469119 1860210 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.214292891s)
	I0127 14:18:25.469195 1860210 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:25.469216 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
	I0127 14:18:25.469532 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
	I0127 14:18:25.469545 1860210 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:25.469562 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:25.469573 1860210 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:25.469581 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
	I0127 14:18:25.469925 1860210 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:25.469946 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:25.469960 1860210 addons.go:479] Verifying addon metrics-server=true in "embed-certs-635679"
	I0127 14:18:25.866472 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-52k8k" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:26.857958 1860210 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.06051743s)
	I0127 14:18:26.858077 1860210 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:26.858099 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
	I0127 14:18:26.858508 1860210 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:26.858535 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
	I0127 14:18:26.858543 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:26.858557 1860210 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:26.858564 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
	I0127 14:18:26.859006 1860210 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:26.859020 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:26.860592 1860210 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-635679 addons enable metrics-server
	
	I0127 14:18:26.861892 1860210 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 14:18:26.863115 1860210 addons.go:514] duration metric: took 3.398732326s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 14:18:28.369038 1860210 pod_ready.go:93] pod "coredns-668d6bf9bc-52k8k" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:28.369069 1860210 pod_ready.go:82] duration metric: took 4.509722512s for pod "coredns-668d6bf9bc-52k8k" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:28.369083 1860210 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:30.378207 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:32.845308 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:34.383070 1860210 pod_ready.go:93] pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:34.383099 1860210 pod_ready.go:82] duration metric: took 6.014008774s for pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.383110 1860210 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.418534 1860210 pod_ready.go:93] pod "etcd-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:34.418566 1860210 pod_ready.go:82] duration metric: took 35.44003ms for pod "etcd-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.418579 1860210 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.444912 1860210 pod_ready.go:93] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:34.444937 1860210 pod_ready.go:82] duration metric: took 26.350357ms for pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.444948 1860210 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.455394 1860210 pod_ready.go:93] pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:34.455417 1860210 pod_ready.go:82] duration metric: took 10.46086ms for pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.455430 1860210 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k2hsk" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.467062 1860210 pod_ready.go:93] pod "kube-proxy-k2hsk" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:34.467097 1860210 pod_ready.go:82] duration metric: took 11.657705ms for pod "kube-proxy-k2hsk" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.467111 1860210 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.774042 1860210 pod_ready.go:93] pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:34.774078 1860210 pod_ready.go:82] duration metric: took 306.957006ms for pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:34.774099 1860210 pod_ready.go:39] duration metric: took 10.9563322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:18:34.774123 1860210 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:18:34.774200 1860210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:18:34.791682 1860210 api_server.go:72] duration metric: took 11.322661462s to wait for apiserver process to appear ...
	I0127 14:18:34.791712 1860210 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:18:34.791737 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
	I0127 14:18:34.796797 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 200:
	ok
	I0127 14:18:34.798034 1860210 api_server.go:141] control plane version: v1.32.1
	I0127 14:18:34.798065 1860210 api_server.go:131] duration metric: took 6.344197ms to wait for apiserver health ...
	I0127 14:18:34.798075 1860210 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:18:34.979734 1860210 system_pods.go:59] 9 kube-system pods found
	I0127 14:18:34.979775 1860210 system_pods.go:61] "coredns-668d6bf9bc-52k8k" [b4744653-9cf8-4fda-a7d5-85bba4da019f] Running
	I0127 14:18:34.979795 1860210 system_pods.go:61] "coredns-668d6bf9bc-vn9c5" [50b23903-1e83-4fbc-b1b9-101a646663c5] Running
	I0127 14:18:34.979801 1860210 system_pods.go:61] "etcd-embed-certs-635679" [7d89dace-a11c-4983-b4ca-80b29d020f4b] Running
	I0127 14:18:34.979806 1860210 system_pods.go:61] "kube-apiserver-embed-certs-635679" [66c0f79b-d0c6-4f3d-9694-02509dd94348] Running
	I0127 14:18:34.979812 1860210 system_pods.go:61] "kube-controller-manager-embed-certs-635679" [63e7d07f-b74b-461a-9a1a-0a9adc3ecb40] Running
	I0127 14:18:34.979817 1860210 system_pods.go:61] "kube-proxy-k2hsk" [a0d30935-bb79-44b5-b061-3b6fcc12ae42] Running
	I0127 14:18:34.979821 1860210 system_pods.go:61] "kube-scheduler-embed-certs-635679" [ca49b72b-d7a3-4f81-9c1d-fa1cc176387c] Running
	I0127 14:18:34.979830 1860210 system_pods.go:61] "metrics-server-f79f97bbb-7xqnn" [2fae80e8-5118-461e-b160-d384bf083f0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:18:34.979840 1860210 system_pods.go:61] "storage-provisioner" [0bdc72ce-c65f-4aca-b113-eff101fc04ad] Running
	I0127 14:18:34.979851 1860210 system_pods.go:74] duration metric: took 181.768087ms to wait for pod list to return data ...
	I0127 14:18:34.979870 1860210 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:18:35.174207 1860210 default_sa.go:45] found service account: "default"
	I0127 14:18:35.174246 1860210 default_sa.go:55] duration metric: took 194.367344ms for default service account to be created ...
	I0127 14:18:35.174261 1860210 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:18:35.377677 1860210 system_pods.go:87] 9 kube-system pods found

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-635679 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-635679 -n embed-certs-635679
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-635679 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-635679 logs -n 25: (1.191313853s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:12 UTC | 27 Jan 25 14:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-635679                 | embed-certs-635679           | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-635679                                  | embed-certs-635679           | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-591346                  | no-preload-591346            | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-591346                                   | no-preload-591346            | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-212529       | default-k8s-diff-port-212529 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-212529 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC |                     |
	|         | default-k8s-diff-port-212529                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-908018             | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-908018 image                           | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	| delete  | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	| start   | -p newest-cni-309688 --memory=2200 --alsologtostderr   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-309688             | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-309688                  | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-309688 --memory=2200 --alsologtostderr   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-309688 image list                           | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	| delete  | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	| delete  | -p no-preload-591346                                   | no-preload-591346            | jenkins | v1.35.0 | 27 Jan 25 14:40 UTC | 27 Jan 25 14:40 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:18:41
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:18:41.854015 1863329 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:18:41.854179 1863329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:18:41.854190 1863329 out.go:358] Setting ErrFile to fd 2...
	I0127 14:18:41.854197 1863329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:18:41.854387 1863329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 14:18:41.855024 1863329 out.go:352] Setting JSON to false
	I0127 14:18:41.856109 1863329 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":39663,"bootTime":1737947859,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:18:41.856224 1863329 start.go:139] virtualization: kvm guest
	I0127 14:18:41.858116 1863329 out.go:177] * [newest-cni-309688] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:18:41.859411 1863329 notify.go:220] Checking for updates...
	I0127 14:18:41.859457 1863329 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:18:41.860616 1863329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:18:41.861927 1863329 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:18:41.863092 1863329 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 14:18:41.864171 1863329 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:18:41.865251 1863329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:18:41.866889 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:18:41.867384 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:41.867442 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:41.883915 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0127 14:18:41.884516 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:41.885154 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:18:41.885177 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:41.885640 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:41.885855 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:18:41.886202 1863329 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:18:41.886661 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:41.886728 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:41.904702 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0127 14:18:41.905242 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:41.905789 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:18:41.905815 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:41.906241 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:41.906460 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:18:41.947119 1863329 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:18:41.948433 1863329 start.go:297] selected driver: kvm2
	I0127 14:18:41.948449 1863329 start.go:901] validating driver "kvm2" against &{Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenA
ddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:18:41.948615 1863329 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:18:41.949339 1863329 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:18:41.949417 1863329 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:18:41.966476 1863329 install.go:137] /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:18:41.966978 1863329 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 14:18:41.967016 1863329 cni.go:84] Creating CNI manager for ""
	I0127 14:18:41.967062 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:18:41.967095 1863329 start.go:340] cluster config:
	{Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:18:41.967211 1863329 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:18:41.969136 1863329 out.go:177] * Starting "newest-cni-309688" primary control-plane node in "newest-cni-309688" cluster
	I0127 14:18:41.970047 1863329 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:18:41.970083 1863329 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 14:18:41.970090 1863329 cache.go:56] Caching tarball of preloaded images
	I0127 14:18:41.970203 1863329 preload.go:172] Found /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 14:18:41.970215 1863329 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 14:18:41.970348 1863329 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/config.json ...
	I0127 14:18:41.970570 1863329 start.go:360] acquireMachinesLock for newest-cni-309688: {Name:mk6fcac41a7a21b211b65e56994e625852d1a781 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:18:41.970626 1863329 start.go:364] duration metric: took 32.288µs to acquireMachinesLock for "newest-cni-309688"
	I0127 14:18:41.970646 1863329 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:18:41.970657 1863329 fix.go:54] fixHost starting: 
	I0127 14:18:41.971072 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:41.971127 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:41.987333 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0127 14:18:41.987957 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:41.988457 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:18:41.988482 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:41.988963 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:41.989252 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:18:41.989407 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:18:41.991188 1863329 fix.go:112] recreateIfNeeded on newest-cni-309688: state=Stopped err=<nil>
	I0127 14:18:41.991220 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	W0127 14:18:41.991396 1863329 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:18:41.993400 1863329 out.go:177] * Restarting existing kvm2 VM for "newest-cni-309688" ...
	I0127 14:18:39.739774 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:18:39.739799 1860441 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:18:39.776579 1860441 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:39.776612 1860441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:18:39.821641 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:18:39.821669 1860441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:18:39.837528 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:39.899562 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:18:39.899592 1860441 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:18:39.941841 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:18:39.941883 1860441 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:18:39.958020 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:18:39.958049 1860441 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:18:39.985706 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:18:39.985733 1860441 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:18:40.018166 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:40.018198 1860441 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:18:40.049338 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:40.335449 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335486 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335522 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335544 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335886 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.335906 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.335921 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.335932 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335939 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335940 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.336011 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336058 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.336071 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.336079 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.336199 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336202 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.336210 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.336321 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336339 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.361215 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.361236 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.361528 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.361572 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.361588 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.976702 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139130092s)
	I0127 14:18:40.976753 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.976768 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.977190 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.977233 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.977244 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.977254 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.977278 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.977544 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.977626 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.977659 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.977685 1860441 addons.go:479] Verifying addon metrics-server=true in "no-preload-591346"
	I0127 14:18:41.537877 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:41.993401 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.943993844s)
	I0127 14:18:41.993457 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:41.993474 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:41.993713 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:41.993737 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:41.993755 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:41.993778 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:41.993785 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:41.994133 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:41.994158 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:41.994172 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:41.995251 1860441 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-591346 addons enable metrics-server
	
	I0127 14:18:41.996556 1860441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 14:18:41.997692 1860441 addons.go:514] duration metric: took 2.74201161s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 14:18:43.539748 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:40.906503 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:42.906895 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:45.405827 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:41.996357 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Start
	I0127 14:18:41.996613 1863329 main.go:141] libmachine: (newest-cni-309688) starting domain...
	I0127 14:18:41.996630 1863329 main.go:141] libmachine: (newest-cni-309688) ensuring networks are active...
	I0127 14:18:41.997620 1863329 main.go:141] libmachine: (newest-cni-309688) Ensuring network default is active
	I0127 14:18:41.998106 1863329 main.go:141] libmachine: (newest-cni-309688) Ensuring network mk-newest-cni-309688 is active
	I0127 14:18:41.998535 1863329 main.go:141] libmachine: (newest-cni-309688) getting domain XML...
	I0127 14:18:41.999349 1863329 main.go:141] libmachine: (newest-cni-309688) creating domain...
	I0127 14:18:43.362085 1863329 main.go:141] libmachine: (newest-cni-309688) waiting for IP...
	I0127 14:18:43.363264 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:43.363792 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:43.363901 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.363777 1863364 retry.go:31] will retry after 245.978549ms: waiting for domain to come up
	I0127 14:18:43.611613 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:43.612280 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:43.612314 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.612267 1863364 retry.go:31] will retry after 277.473907ms: waiting for domain to come up
	I0127 14:18:43.891925 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:43.892577 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:43.892608 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.892527 1863364 retry.go:31] will retry after 327.737062ms: waiting for domain to come up
	I0127 14:18:44.221804 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:44.222337 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:44.222385 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:44.222298 1863364 retry.go:31] will retry after 472.286938ms: waiting for domain to come up
	I0127 14:18:44.695804 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:44.696473 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:44.696498 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:44.696438 1863364 retry.go:31] will retry after 556.965256ms: waiting for domain to come up
	I0127 14:18:45.254693 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:45.255242 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:45.255276 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:45.255189 1863364 retry.go:31] will retry after 809.038394ms: waiting for domain to come up
	I0127 14:18:46.066036 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:46.066585 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:46.066616 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:46.066540 1863364 retry.go:31] will retry after 758.303359ms: waiting for domain to come up
	I0127 14:18:46.826373 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:46.826997 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:46.827029 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:46.826933 1863364 retry.go:31] will retry after 1.102767077s: waiting for domain to come up
	I0127 14:18:46.040082 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:47.537709 1860441 pod_ready.go:93] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.537735 1860441 pod_ready.go:82] duration metric: took 8.005981983s for pod "etcd-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.537745 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.545174 1860441 pod_ready.go:93] pod "kube-apiserver-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.545199 1860441 pod_ready.go:82] duration metric: took 7.447836ms for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.545210 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.564920 1860441 pod_ready.go:93] pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.564957 1860441 pod_ready.go:82] duration metric: took 19.735587ms for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.564973 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.588782 1860441 pod_ready.go:93] pod "kube-proxy-k69dv" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.588811 1860441 pod_ready.go:82] duration metric: took 23.829861ms for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.588824 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.598620 1860441 pod_ready.go:93] pod "kube-scheduler-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.598656 1860441 pod_ready.go:82] duration metric: took 9.822306ms for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.598668 1860441 pod_ready.go:39] duration metric: took 8.076081083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:18:47.598693 1860441 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:18:47.598793 1860441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:18:47.615862 1860441 api_server.go:72] duration metric: took 8.36019503s to wait for apiserver process to appear ...
	I0127 14:18:47.615895 1860441 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:18:47.615918 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:18:47.631872 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0127 14:18:47.632742 1860441 api_server.go:141] control plane version: v1.32.1
	I0127 14:18:47.632766 1860441 api_server.go:131] duration metric: took 16.863539ms to wait for apiserver health ...
	I0127 14:18:47.632774 1860441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:18:47.739770 1860441 system_pods.go:59] 9 kube-system pods found
	I0127 14:18:47.739814 1860441 system_pods.go:61] "coredns-668d6bf9bc-cm66w" [97ffe415-a70c-44a4-aa07-5b99576c749d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:18:47.739824 1860441 system_pods.go:61] "coredns-668d6bf9bc-lq9hg" [688b4191-8c28-440b-bc93-d52964fe105c] Running
	I0127 14:18:47.739833 1860441 system_pods.go:61] "etcd-no-preload-591346" [01ae260c-cbf6-4f04-be4e-565f3f408c45] Running
	I0127 14:18:47.739838 1860441 system_pods.go:61] "kube-apiserver-no-preload-591346" [1433350f-5302-42e1-8763-0f8bbde34676] Running
	I0127 14:18:47.739842 1860441 system_pods.go:61] "kube-controller-manager-no-preload-591346" [49eab0a5-09c9-4a2d-9913-1b45c145b52a] Running
	I0127 14:18:47.739846 1860441 system_pods.go:61] "kube-proxy-k69dv" [393d6681-7d87-479a-94d3-5ff6cbfe16ed] Running
	I0127 14:18:47.739849 1860441 system_pods.go:61] "kube-scheduler-no-preload-591346" [9f5af2ad-71a3-4481-a18a-8477f843553a] Running
	I0127 14:18:47.739855 1860441 system_pods.go:61] "metrics-server-f79f97bbb-fqckz" [30644e2b-7988-4b55-aa94-fe774b820ed4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:18:47.739859 1860441 system_pods.go:61] "storage-provisioner" [f10d2d4c-7f96-4ff6-b6ae-71b7918fd3ee] Running
	I0127 14:18:47.739866 1860441 system_pods.go:74] duration metric: took 107.08564ms to wait for pod list to return data ...
	I0127 14:18:47.739874 1860441 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:18:47.936494 1860441 default_sa.go:45] found service account: "default"
	I0127 14:18:47.936524 1860441 default_sa.go:55] duration metric: took 196.641742ms for default service account to be created ...
	I0127 14:18:47.936536 1860441 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:18:48.139726 1860441 system_pods.go:87] 9 kube-system pods found
	I0127 14:18:47.405959 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:49.408149 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:47.931337 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:47.931793 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:47.931838 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:47.931776 1863364 retry.go:31] will retry after 1.120510293s: waiting for domain to come up
	I0127 14:18:49.053548 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:49.054204 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:49.054231 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:49.054156 1863364 retry.go:31] will retry after 1.733549309s: waiting for domain to come up
	I0127 14:18:50.790083 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:50.790567 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:50.790650 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:50.790566 1863364 retry.go:31] will retry after 1.990202359s: waiting for domain to come up
	I0127 14:18:51.906048 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:53.906496 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:52.782229 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:52.782850 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:52.782892 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:52.782738 1863364 retry.go:31] will retry after 2.327681841s: waiting for domain to come up
	I0127 14:18:55.113291 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:55.113832 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:55.113864 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:55.113778 1863364 retry.go:31] will retry after 3.526138042s: waiting for domain to come up
	I0127 14:18:55.906587 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:58.405047 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:58.641406 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:58.642022 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:58.642056 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:58.641994 1863364 retry.go:31] will retry after 5.217691775s: waiting for domain to come up
	I0127 14:19:00.906487 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:03.405134 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:05.405708 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:03.862320 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.862779 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has current primary IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.862804 1863329 main.go:141] libmachine: (newest-cni-309688) found domain IP: 192.168.72.17
	I0127 14:19:03.862815 1863329 main.go:141] libmachine: (newest-cni-309688) reserving static IP address...
	I0127 14:19:03.863295 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "newest-cni-309688", mac: "52:54:00:1b:25:ab", ip: "192.168.72.17"} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.863323 1863329 main.go:141] libmachine: (newest-cni-309688) reserved static IP address 192.168.72.17 for domain newest-cni-309688
	I0127 14:19:03.863342 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | skip adding static IP to network mk-newest-cni-309688 - found existing host DHCP lease matching {name: "newest-cni-309688", mac: "52:54:00:1b:25:ab", ip: "192.168.72.17"}
	I0127 14:19:03.863372 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Getting to WaitForSSH function...
	I0127 14:19:03.863389 1863329 main.go:141] libmachine: (newest-cni-309688) waiting for SSH...
	I0127 14:19:03.865894 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.866214 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.866242 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.866399 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Using SSH client type: external
	I0127 14:19:03.866428 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa (-rw-------)
	I0127 14:19:03.866460 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:19:03.866485 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | About to run SSH command:
	I0127 14:19:03.866510 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | exit 0
	I0127 14:19:03.986391 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | SSH cmd err, output: <nil>: 
	I0127 14:19:03.986778 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetConfigRaw
	I0127 14:19:03.987411 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:03.990205 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.990686 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.990714 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.990989 1863329 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/config.json ...
	I0127 14:19:03.991197 1863329 machine.go:93] provisionDockerMachine start ...
	I0127 14:19:03.991218 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:03.991433 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:03.993663 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.993956 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.994002 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.994179 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:03.994359 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:03.994518 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:03.994653 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:03.994863 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:03.995069 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:03.995080 1863329 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:19:04.094835 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 14:19:04.094866 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
	I0127 14:19:04.095102 1863329 buildroot.go:166] provisioning hostname "newest-cni-309688"
	I0127 14:19:04.095129 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
	I0127 14:19:04.095318 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.097835 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.098248 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.098281 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.098404 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.098576 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.098735 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.098905 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.099088 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:04.099267 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:04.099282 1863329 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-309688 && echo "newest-cni-309688" | sudo tee /etc/hostname
	I0127 14:19:04.213036 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-309688
	
	I0127 14:19:04.213082 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.215824 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.216184 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.216208 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.216357 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.216549 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.216671 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.216793 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.216979 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:04.217204 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:04.217230 1863329 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-309688' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-309688/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-309688' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:19:04.329169 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:19:04.329206 1863329 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-1798877/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-1798877/.minikube}
	I0127 14:19:04.329248 1863329 buildroot.go:174] setting up certificates
	I0127 14:19:04.329259 1863329 provision.go:84] configureAuth start
	I0127 14:19:04.329269 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
	I0127 14:19:04.329540 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:04.332411 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.332850 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.332878 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.333078 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.335728 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.336143 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.336174 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.336351 1863329 provision.go:143] copyHostCerts
	I0127 14:19:04.336415 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem, removing ...
	I0127 14:19:04.336439 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem
	I0127 14:19:04.336527 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem (1078 bytes)
	I0127 14:19:04.336664 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem, removing ...
	I0127 14:19:04.336677 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem
	I0127 14:19:04.336718 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem (1123 bytes)
	I0127 14:19:04.336806 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem, removing ...
	I0127 14:19:04.336817 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem
	I0127 14:19:04.336852 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem (1675 bytes)
	I0127 14:19:04.336995 1863329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem org=jenkins.newest-cni-309688 san=[127.0.0.1 192.168.72.17 localhost minikube newest-cni-309688]
	I0127 14:19:04.445795 1863329 provision.go:177] copyRemoteCerts
	I0127 14:19:04.445894 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:19:04.445928 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.448735 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.449074 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.449106 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.449317 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.449501 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.449677 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.449816 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.528783 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:19:04.552897 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 14:19:04.575992 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:19:04.598152 1863329 provision.go:87] duration metric: took 268.879651ms to configureAuth
	I0127 14:19:04.598183 1863329 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:19:04.598397 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:19:04.598411 1863329 machine.go:96] duration metric: took 607.201271ms to provisionDockerMachine
	I0127 14:19:04.598421 1863329 start.go:293] postStartSetup for "newest-cni-309688" (driver="kvm2")
	I0127 14:19:04.598437 1863329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:19:04.598481 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.598842 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:19:04.598874 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.601257 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.601599 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.601628 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.601759 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.601945 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.602093 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.602260 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.685084 1863329 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:19:04.689047 1863329 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:19:04.689081 1863329 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/addons for local assets ...
	I0127 14:19:04.689137 1863329 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/files for local assets ...
	I0127 14:19:04.689212 1863329 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem -> 18060702.pem in /etc/ssl/certs
	I0127 14:19:04.689300 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:19:04.698109 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:19:04.723269 1863329 start.go:296] duration metric: took 124.828224ms for postStartSetup
	I0127 14:19:04.723315 1863329 fix.go:56] duration metric: took 22.752659687s for fixHost
	I0127 14:19:04.723339 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.726123 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.726570 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.726601 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.726820 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.727042 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.727229 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.727405 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.727627 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:04.727869 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:04.727885 1863329 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:19:04.831094 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987544.794055340
	
	I0127 14:19:04.831118 1863329 fix.go:216] guest clock: 1737987544.794055340
	I0127 14:19:04.831124 1863329 fix.go:229] Guest: 2025-01-27 14:19:04.79405534 +0000 UTC Remote: 2025-01-27 14:19:04.723319581 +0000 UTC m=+22.912787075 (delta=70.735759ms)
	I0127 14:19:04.831145 1863329 fix.go:200] guest clock delta is within tolerance: 70.735759ms
	I0127 14:19:04.831149 1863329 start.go:83] releasing machines lock for "newest-cni-309688", held for 22.860512585s
	I0127 14:19:04.831167 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.831433 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:04.834349 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.834694 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.834718 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.834915 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.835447 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.835626 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.835729 1863329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:19:04.835772 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.835799 1863329 ssh_runner.go:195] Run: cat /version.json
	I0127 14:19:04.835821 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.838501 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.838695 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.838855 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.838881 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.839077 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.839082 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.839117 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.839262 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.839272 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.839481 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.839482 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.839635 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.839648 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.839742 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.942379 1863329 ssh_runner.go:195] Run: systemctl --version
	I0127 14:19:04.948168 1863329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:19:04.953645 1863329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:19:04.953703 1863329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:19:04.969617 1863329 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:19:04.969646 1863329 start.go:495] detecting cgroup driver to use...
	I0127 14:19:04.969742 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 14:19:05.001151 1863329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 14:19:05.014859 1863329 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:19:05.014928 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:19:05.030145 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:19:05.044008 1863329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:19:05.174941 1863329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:19:05.330526 1863329 docker.go:233] disabling docker service ...
	I0127 14:19:05.330619 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:19:05.345183 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:19:05.357628 1863329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:19:05.474635 1863329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:19:05.587063 1863329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:19:05.600224 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:19:05.616896 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 14:19:05.628539 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 14:19:05.639531 1863329 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 14:19:05.639605 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 14:19:05.649978 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:19:05.659986 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 14:19:05.669665 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:19:05.680018 1863329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:19:05.690041 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 14:19:05.699586 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 14:19:05.709482 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 14:19:05.719643 1863329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:19:05.728454 1863329 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:19:05.728520 1863329 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:19:05.743292 1863329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:19:05.752875 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:05.862682 1863329 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 14:19:05.897001 1863329 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 14:19:05.897074 1863329 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:19:05.901946 1863329 retry.go:31] will retry after 1.257073282s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 14:19:07.159917 1863329 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:19:07.165117 1863329 start.go:563] Will wait 60s for crictl version
	I0127 14:19:07.165209 1863329 ssh_runner.go:195] Run: which crictl
	I0127 14:19:07.168995 1863329 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:19:07.209167 1863329 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 14:19:07.209244 1863329 ssh_runner.go:195] Run: containerd --version
	I0127 14:19:07.236320 1863329 ssh_runner.go:195] Run: containerd --version
	I0127 14:19:07.261054 1863329 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 14:19:07.262245 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:07.265288 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:07.265739 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:07.265772 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:07.265980 1863329 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 14:19:07.270111 1863329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:19:07.283905 1863329 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 14:19:07.406716 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:09.905446 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:07.285143 1863329 kubeadm.go:883] updating cluster {Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:19:07.285271 1863329 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:19:07.285342 1863329 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:19:07.314913 1863329 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:19:07.314944 1863329 containerd.go:534] Images already preloaded, skipping extraction
	I0127 14:19:07.315010 1863329 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:19:07.345742 1863329 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:19:07.345770 1863329 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:19:07.345779 1863329 kubeadm.go:934] updating node { 192.168.72.17 8443 v1.32.1 containerd true true} ...
	I0127 14:19:07.345897 1863329 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-309688 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:19:07.345956 1863329 ssh_runner.go:195] Run: sudo crictl info
	I0127 14:19:07.379712 1863329 cni.go:84] Creating CNI manager for ""
	I0127 14:19:07.379740 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:19:07.379759 1863329 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 14:19:07.379800 1863329 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.17 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-309688 NodeName:newest-cni-309688 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:19:07.379979 1863329 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-309688"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.17"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.17"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:19:07.380049 1863329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:19:07.390315 1863329 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:19:07.390456 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:19:07.399585 1863329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 14:19:07.417531 1863329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:19:07.433514 1863329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 14:19:07.449318 1863329 ssh_runner.go:195] Run: grep 192.168.72.17	control-plane.minikube.internal$ /etc/hosts
	I0127 14:19:07.452848 1863329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:19:07.464375 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:07.590492 1863329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:19:07.609018 1863329 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688 for IP: 192.168.72.17
	I0127 14:19:07.609048 1863329 certs.go:194] generating shared ca certs ...
	I0127 14:19:07.609072 1863329 certs.go:226] acquiring lock for ca certs: {Name:mkc6b95fb3d2c0d0c7049cde446028a0d731f231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:07.609277 1863329 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key
	I0127 14:19:07.609328 1863329 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key
	I0127 14:19:07.609339 1863329 certs.go:256] generating profile certs ...
	I0127 14:19:07.609434 1863329 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/client.key
	I0127 14:19:07.609500 1863329 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.key.54b7a6ae
	I0127 14:19:07.609534 1863329 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.key
	I0127 14:19:07.609661 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem (1338 bytes)
	W0127 14:19:07.609700 1863329 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070_empty.pem, impossibly tiny 0 bytes
	I0127 14:19:07.609707 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:19:07.609732 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:19:07.609776 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:19:07.609807 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem (1675 bytes)
	I0127 14:19:07.609872 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:19:07.613389 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:19:07.649675 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:19:07.678577 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:19:07.707466 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 14:19:07.736820 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 14:19:07.764078 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:19:07.791040 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:19:07.817979 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:19:07.846978 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:19:07.869002 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem --> /usr/share/ca-certificates/1806070.pem (1338 bytes)
	I0127 14:19:07.892530 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /usr/share/ca-certificates/18060702.pem (1708 bytes)
	I0127 14:19:07.917138 1863329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:19:07.933638 1863329 ssh_runner.go:195] Run: openssl version
	I0127 14:19:07.939662 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:19:07.951267 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:07.955439 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:07.955494 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:07.961014 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:19:07.972145 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806070.pem && ln -fs /usr/share/ca-certificates/1806070.pem /etc/ssl/certs/1806070.pem"
	I0127 14:19:07.983517 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806070.pem
	I0127 14:19:07.987671 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:10 /usr/share/ca-certificates/1806070.pem
	I0127 14:19:07.987719 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806070.pem
	I0127 14:19:07.993079 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806070.pem /etc/ssl/certs/51391683.0"
	I0127 14:19:08.004139 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18060702.pem && ln -fs /usr/share/ca-certificates/18060702.pem /etc/ssl/certs/18060702.pem"
	I0127 14:19:08.015248 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18060702.pem
	I0127 14:19:08.019068 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:10 /usr/share/ca-certificates/18060702.pem
	I0127 14:19:08.019113 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18060702.pem
	I0127 14:19:08.024062 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18060702.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:19:08.033948 1863329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:19:08.038251 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:19:08.043547 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:19:08.048804 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:19:08.054182 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:19:08.059290 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:19:08.064227 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 14:19:08.069315 1863329 kubeadm.go:392] StartCluster: {Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:19:08.069441 1863329 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 14:19:08.069490 1863329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:19:08.106407 1863329 cri.go:89] found id: "44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666"
	I0127 14:19:08.106434 1863329 cri.go:89] found id: "d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186"
	I0127 14:19:08.106441 1863329 cri.go:89] found id: "7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf"
	I0127 14:19:08.106446 1863329 cri.go:89] found id: "72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199"
	I0127 14:19:08.106450 1863329 cri.go:89] found id: "0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698"
	I0127 14:19:08.106455 1863329 cri.go:89] found id: "0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63"
	I0127 14:19:08.106459 1863329 cri.go:89] found id: "2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb"
	I0127 14:19:08.106463 1863329 cri.go:89] found id: "89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe"
	I0127 14:19:08.106467 1863329 cri.go:89] found id: ""
	I0127 14:19:08.106525 1863329 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 14:19:08.121718 1863329 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T14:19:08Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 14:19:08.121817 1863329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:19:08.131128 1863329 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 14:19:08.131152 1863329 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 14:19:08.131206 1863329 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 14:19:08.141323 1863329 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:19:08.142436 1863329 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-309688" does not appear in /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:19:08.143126 1863329 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-1798877/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-309688" cluster setting kubeconfig missing "newest-cni-309688" context setting]
	I0127 14:19:08.144090 1863329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:08.145938 1863329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 14:19:08.155827 1863329 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.17
	I0127 14:19:08.155862 1863329 kubeadm.go:1160] stopping kube-system containers ...
	I0127 14:19:08.155887 1863329 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 14:19:08.155943 1863329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:19:08.191753 1863329 cri.go:89] found id: "44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666"
	I0127 14:19:08.191787 1863329 cri.go:89] found id: "d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186"
	I0127 14:19:08.191794 1863329 cri.go:89] found id: "7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf"
	I0127 14:19:08.191799 1863329 cri.go:89] found id: "72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199"
	I0127 14:19:08.191804 1863329 cri.go:89] found id: "0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698"
	I0127 14:19:08.191808 1863329 cri.go:89] found id: "0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63"
	I0127 14:19:08.191812 1863329 cri.go:89] found id: "2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb"
	I0127 14:19:08.191817 1863329 cri.go:89] found id: "89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe"
	I0127 14:19:08.191822 1863329 cri.go:89] found id: ""
	I0127 14:19:08.191829 1863329 cri.go:252] Stopping containers: [44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666 d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186 7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf 72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199 0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698 0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63 2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb 89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe]
	I0127 14:19:08.191909 1863329 ssh_runner.go:195] Run: which crictl
	I0127 14:19:08.195781 1863329 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666 d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186 7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf 72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199 0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698 0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63 2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb 89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe
	I0127 14:19:08.232200 1863329 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 14:19:08.248830 1863329 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:19:08.258186 1863329 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:19:08.258248 1863329 kubeadm.go:157] found existing configuration files:
	
	I0127 14:19:08.258301 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:19:08.266710 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:19:08.266787 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:19:08.276679 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:19:08.285327 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:19:08.285384 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:19:08.293919 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:19:08.302352 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:19:08.302466 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:19:08.314481 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:19:08.324318 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:19:08.324378 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:19:08.333925 1863329 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:19:08.343981 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:08.484856 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.407056 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.612649 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.691321 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.780355 1863329 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:19:09.780450 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:10.281441 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:10.780982 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:10.803824 1863329 api_server.go:72] duration metric: took 1.023465596s to wait for apiserver process to appear ...
	I0127 14:19:10.803860 1863329 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:19:10.803886 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:10.804578 1863329 api_server.go:269] stopped: https://192.168.72.17:8443/healthz: Get "https://192.168.72.17:8443/healthz": dial tcp 192.168.72.17:8443: connect: connection refused
	I0127 14:19:11.304934 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:11.906081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:13.906183 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:13.554007 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:19:13.554040 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:19:13.554061 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:13.596380 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:19:13.596419 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:19:13.804894 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:13.819580 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:13.819610 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:14.304214 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:14.309598 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:14.309627 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:14.804236 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:14.809512 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:14.809551 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:15.304181 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:15.309590 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:15.309618 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:15.803958 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:15.813848 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:15.813901 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:16.304624 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:16.310313 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:16.310345 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:16.804590 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:16.809168 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 200:
	ok
	I0127 14:19:16.816088 1863329 api_server.go:141] control plane version: v1.32.1
	I0127 14:19:16.816123 1863329 api_server.go:131] duration metric: took 6.012253595s to wait for apiserver health ...
	I0127 14:19:16.816135 1863329 cni.go:84] Creating CNI manager for ""
	I0127 14:19:16.816144 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:19:16.817843 1863329 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:19:16.819038 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:19:16.829479 1863329 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:19:16.847164 1863329 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:19:16.857140 1863329 system_pods.go:59] 9 kube-system pods found
	I0127 14:19:16.857176 1863329 system_pods.go:61] "coredns-668d6bf9bc-f66f4" [b30ba9c6-eb6e-44c1-b389-96263bb405a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:16.857187 1863329 system_pods.go:61] "coredns-668d6bf9bc-pt6d2" [d8cf3c75-3646-40d8-8131-efd331e2cec7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:16.857198 1863329 system_pods.go:61] "etcd-newest-cni-309688" [f963f636-8186-4dd8-8263-b7bc29d15bc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:19:16.857210 1863329 system_pods.go:61] "kube-apiserver-newest-cni-309688" [90584ec0-2731-48e1-a2e6-5bef7b170386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:19:16.857219 1863329 system_pods.go:61] "kube-controller-manager-newest-cni-309688" [ef3b3e37-08d8-48aa-a55e-55d4c87c8189] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:19:16.857227 1863329 system_pods.go:61] "kube-proxy-8mwp9" [ebb658f3-eba2-4743-94cd-da996046bd02] Running
	I0127 14:19:16.857236 1863329 system_pods.go:61] "kube-scheduler-newest-cni-309688" [07bcbbe9-474a-4bc2-9f58-f889fa685754] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:19:16.857263 1863329 system_pods.go:61] "metrics-server-f79f97bbb-jw4m9" [85929ac8-142c-4bc7-90da-5c13f9ff3c0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:19:16.857277 1863329 system_pods.go:61] "storage-provisioner" [94820502-acf5-4297-8fc9-d4b4953b01ab] Running
	I0127 14:19:16.857287 1863329 system_pods.go:74] duration metric: took 10.102454ms to wait for pod list to return data ...
	I0127 14:19:16.857300 1863329 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:19:16.860835 1863329 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:19:16.860862 1863329 node_conditions.go:123] node cpu capacity is 2
	I0127 14:19:16.860886 1863329 node_conditions.go:105] duration metric: took 3.575582ms to run NodePressure ...
	I0127 14:19:16.860913 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:17.133479 1863329 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:19:17.144656 1863329 ops.go:34] apiserver oom_adj: -16
	I0127 14:19:17.144684 1863329 kubeadm.go:597] duration metric: took 9.013524206s to restartPrimaryControlPlane
	I0127 14:19:17.144695 1863329 kubeadm.go:394] duration metric: took 9.075390076s to StartCluster
	I0127 14:19:17.144715 1863329 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:17.144810 1863329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:19:17.146498 1863329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:17.146819 1863329 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 14:19:17.146906 1863329 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:19:17.147019 1863329 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-309688"
	I0127 14:19:17.147042 1863329 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-309688"
	I0127 14:19:17.147041 1863329 addons.go:69] Setting default-storageclass=true in profile "newest-cni-309688"
	W0127 14:19:17.147054 1863329 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:19:17.147075 1863329 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-309688"
	I0127 14:19:17.147081 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:19:17.147079 1863329 addons.go:69] Setting dashboard=true in profile "newest-cni-309688"
	I0127 14:19:17.147063 1863329 addons.go:69] Setting metrics-server=true in profile "newest-cni-309688"
	I0127 14:19:17.147150 1863329 addons.go:238] Setting addon metrics-server=true in "newest-cni-309688"
	W0127 14:19:17.147164 1863329 addons.go:247] addon metrics-server should already be in state true
	I0127 14:19:17.147190 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.147088 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.147127 1863329 addons.go:238] Setting addon dashboard=true in "newest-cni-309688"
	W0127 14:19:17.147431 1863329 addons.go:247] addon dashboard should already be in state true
	I0127 14:19:17.147463 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.147523 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147558 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147565 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.147607 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.147687 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147718 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.147797 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147810 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.148440 1863329 out.go:177] * Verifying Kubernetes components...
	I0127 14:19:17.149687 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:17.163903 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0127 14:19:17.164136 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0127 14:19:17.164313 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.164874 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.165122 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.165143 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.165396 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.165415 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.165676 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.165822 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.165886 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.166471 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.166526 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.175217 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I0127 14:19:17.175873 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.176532 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.176558 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.176979 1863329 addons.go:238] Setting addon default-storageclass=true in "newest-cni-309688"
	I0127 14:19:17.176997 1863329 main.go:141] libmachine: () Calling .GetMachineName
	W0127 14:19:17.177002 1863329 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:19:17.177080 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.177500 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.177518 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.177541 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.177556 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.192916 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
	I0127 14:19:17.193458 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.194088 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.194110 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.194524 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.195179 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.195214 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.196238 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0127 14:19:17.196598 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.196918 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0127 14:19:17.197180 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.197200 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.197360 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.197480 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.197523 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35899
	I0127 14:19:17.197802 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.197813 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.198103 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.198164 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.198321 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.198535 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.198583 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.198888 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.198902 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.199305 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.199518 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.200369 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.201165 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.202593 1863329 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:19:17.202676 1863329 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:19:17.203794 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:19:17.203807 1863329 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:19:17.203824 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.203911 1863329 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:19:17.203926 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:19:17.203944 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.207477 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.207978 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.208029 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.208889 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.209077 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.209227 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.209363 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.216222 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.216592 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I0127 14:19:17.216814 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.216831 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.216961 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.217064 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.217256 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.217411 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.217422 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.217463 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.217578 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.217795 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.217839 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0127 14:19:17.218152 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.218203 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.218804 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.218816 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.219270 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.219480 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.219969 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.220954 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.221278 1863329 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:19:17.221291 1863329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:19:17.221312 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.221888 1863329 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:19:17.223572 1863329 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:19:17.225013 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:19:17.225038 1863329 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:19:17.225052 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.225188 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.225554 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.225777 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.225825 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.226023 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.226118 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.226242 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.228625 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.228937 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.228977 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.229171 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.229344 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.229536 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.229794 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.331878 1863329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:19:17.351919 1863329 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:19:17.352011 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:17.365611 1863329 api_server.go:72] duration metric: took 218.744274ms to wait for apiserver process to appear ...
	I0127 14:19:17.365637 1863329 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:19:17.365655 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:17.372023 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 200:
	ok
	I0127 14:19:17.373577 1863329 api_server.go:141] control plane version: v1.32.1
	I0127 14:19:17.373603 1863329 api_server.go:131] duration metric: took 7.959402ms to wait for apiserver health ...
	I0127 14:19:17.373612 1863329 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:19:17.382361 1863329 system_pods.go:59] 9 kube-system pods found
	I0127 14:19:17.382397 1863329 system_pods.go:61] "coredns-668d6bf9bc-f66f4" [b30ba9c6-eb6e-44c1-b389-96263bb405a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:17.382408 1863329 system_pods.go:61] "coredns-668d6bf9bc-pt6d2" [d8cf3c75-3646-40d8-8131-efd331e2cec7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:17.382420 1863329 system_pods.go:61] "etcd-newest-cni-309688" [f963f636-8186-4dd8-8263-b7bc29d15bc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:19:17.382430 1863329 system_pods.go:61] "kube-apiserver-newest-cni-309688" [90584ec0-2731-48e1-a2e6-5bef7b170386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:19:17.382453 1863329 system_pods.go:61] "kube-controller-manager-newest-cni-309688" [ef3b3e37-08d8-48aa-a55e-55d4c87c8189] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:19:17.382460 1863329 system_pods.go:61] "kube-proxy-8mwp9" [ebb658f3-eba2-4743-94cd-da996046bd02] Running
	I0127 14:19:17.382473 1863329 system_pods.go:61] "kube-scheduler-newest-cni-309688" [07bcbbe9-474a-4bc2-9f58-f889fa685754] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:19:17.382480 1863329 system_pods.go:61] "metrics-server-f79f97bbb-jw4m9" [85929ac8-142c-4bc7-90da-5c13f9ff3c0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:19:17.382486 1863329 system_pods.go:61] "storage-provisioner" [94820502-acf5-4297-8fc9-d4b4953b01ab] Running
	I0127 14:19:17.382496 1863329 system_pods.go:74] duration metric: took 8.875555ms to wait for pod list to return data ...
	I0127 14:19:17.382507 1863329 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:19:17.385289 1863329 default_sa.go:45] found service account: "default"
	I0127 14:19:17.385310 1863329 default_sa.go:55] duration metric: took 2.794486ms for default service account to be created ...
	I0127 14:19:17.385319 1863329 kubeadm.go:582] duration metric: took 238.459291ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 14:19:17.385341 1863329 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:19:17.388555 1863329 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:19:17.388583 1863329 node_conditions.go:123] node cpu capacity is 2
	I0127 14:19:17.388596 1863329 node_conditions.go:105] duration metric: took 3.249906ms to run NodePressure ...
	I0127 14:19:17.388610 1863329 start.go:241] waiting for startup goroutines ...
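	# Sketch: the readiness probes logged above can be replayed by hand. The IP/port come from
	# the log; the kubectl context name assumes minikube's default of naming the context after
	# the profile ("newest-cni-309688").
	curl -k https://192.168.72.17:8443/healthz
	kubectl --context newest-cni-309688 get pods -n kube-system
	kubectl --context newest-cni-309688 describe node newest-cni-309688 | grep -E 'cpu:|ephemeral-storage:'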
	I0127 14:19:17.418149 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:19:17.421312 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:19:17.421340 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:19:17.438395 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:19:17.454881 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:19:17.454907 1863329 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:19:17.463957 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:19:17.463983 1863329 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:19:17.511881 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:19:17.511918 1863329 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:19:17.526875 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:19:17.526902 1863329 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:19:17.564740 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:19:17.593901 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:19:17.593956 1863329 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:19:17.686229 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:19:17.686255 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:19:17.771605 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:19:17.771642 1863329 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:19:17.858960 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:19:17.858995 1863329 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:19:17.968615 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:19:17.968653 1863329 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:19:18.103281 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:19:18.103311 1863329 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:19:18.180707 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:19:18.180741 1863329 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:19:18.229422 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:19:19.526682 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.088226902s)
	I0127 14:19:19.526763 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.526777 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.526802 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962012351s)
	I0127 14:19:19.526851 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.526861 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.108674811s)
	I0127 14:19:19.526875 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.526891 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.526910 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.527161 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.527203 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.527212 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.527219 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.527227 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.528059 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528072 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528080 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.528088 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.528229 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528239 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528293 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.528342 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528349 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528356 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.528362 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.528502 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.528531 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528538 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528548 1863329 addons.go:479] Verifying addon metrics-server=true in "newest-cni-309688"
	I0127 14:19:19.528986 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.529006 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.529009 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.552242 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.552274 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.552631 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.552650 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.709148 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.47964575s)
	I0127 14:19:19.709210 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.709226 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.709584 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.709606 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.709613 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.709610 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.709620 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.709911 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.709925 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.711462 1863329 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-309688 addons enable metrics-server
	
	I0127 14:19:19.712846 1863329 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0127 14:19:19.714093 1863329 addons.go:514] duration metric: took 2.567193619s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0127 14:19:19.714146 1863329 start.go:246] waiting for cluster config update ...
	I0127 14:19:19.714163 1863329 start.go:255] writing updated cluster config ...
	I0127 14:19:19.714515 1863329 ssh_runner.go:195] Run: rm -f paused
	I0127 14:19:19.771292 1863329 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:19:19.773125 1863329 out.go:177] * Done! kubectl is now configured to use "newest-cni-309688" cluster and "default" namespace by default
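	# Sketch: a quick check that the addons enabled above actually landed. The namespace
	# (kubernetes-dashboard) and the metrics-server service name follow minikube's stock addon
	# manifests; they are assumptions here, not shown in this log.
	kubectl --context newest-cni-309688 -n kubernetes-dashboard get deploy,svc
	kubectl --context newest-cni-309688 -n kube-system get svc metrics-server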
	I0127 14:19:16.407410 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:18.408328 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:20.905706 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:22.906390 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:25.405847 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:27.406081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:29.406653 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:31.905101 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:33.906032 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:36.406416 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:38.905541 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:41.405451 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:43.405883 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:45.905497 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:47.905917 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:50.405296 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:52.405989 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:54.905953 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:56.906021 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:58.906598 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:01.405909 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:03.406128 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:05.906092 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:08.405216 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:10.405449 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:12.905583 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:14.399935 1860751 pod_ready.go:82] duration metric: took 4m0.000530283s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" ...
	E0127 14:20:14.399966 1860751 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 14:20:14.399992 1860751 pod_ready.go:39] duration metric: took 4m31.410913398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:14.400032 1860751 kubeadm.go:597] duration metric: took 5m29.594675564s to restartPrimaryControlPlane
	W0127 14:20:14.400141 1860751 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
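	# Sketch: before the cluster reset below, the stuck pod's conditions can be inspected
	# directly. The k8s-app=metrics-server label is the one used by minikube's metrics-server
	# addon manifest (an assumption, not shown in this log).
	kubectl --context default-k8s-diff-port-212529 -n kube-system get pod -l k8s-app=metrics-server -o wide
	kubectl --context default-k8s-diff-port-212529 -n kube-system describe pod -l k8s-app=metrics-server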
	I0127 14:20:14.400175 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 14:20:15.909704 1860751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.509493932s)
	I0127 14:20:15.909782 1860751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:20:15.925857 1860751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:20:15.935803 1860751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:20:15.946508 1860751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:20:15.946527 1860751 kubeadm.go:157] found existing configuration files:
	
	I0127 14:20:15.946566 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 14:20:15.956633 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:20:15.956690 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:20:15.966965 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 14:20:15.984740 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:20:15.984801 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:20:15.995541 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 14:20:16.005543 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:20:16.005605 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:20:16.015855 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 14:20:16.025594 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:20:16.025640 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
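	# Sketch of the stale-config cleanup performed above: each control-plane kubeconfig that
	# does not reference the expected API endpoint is removed before kubeadm is re-run.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done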
	I0127 14:20:16.035989 1860751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:20:16.197395 1860751 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:20:24.074171 1860751 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:20:24.074259 1860751 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:20:24.074369 1860751 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:20:24.074528 1860751 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:20:24.074657 1860751 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:20:24.074731 1860751 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:20:24.076292 1860751 out.go:235]   - Generating certificates and keys ...
	I0127 14:20:24.076373 1860751 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:20:24.076450 1860751 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:20:24.076532 1860751 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:20:24.076585 1860751 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:20:24.076644 1860751 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:20:24.076713 1860751 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:20:24.076800 1860751 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:20:24.076884 1860751 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:20:24.076992 1860751 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:20:24.077103 1860751 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:20:24.077169 1860751 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:20:24.077243 1860751 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:20:24.077289 1860751 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:20:24.077349 1860751 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:20:24.077397 1860751 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:20:24.077468 1860751 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:20:24.077537 1860751 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:20:24.077610 1860751 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:20:24.077669 1860751 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:20:24.078852 1860751 out.go:235]   - Booting up control plane ...
	I0127 14:20:24.078965 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:20:24.079055 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:20:24.079140 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:20:24.079285 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:20:24.079429 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:20:24.079489 1860751 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:20:24.079690 1860751 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:20:24.079833 1860751 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:20:24.079921 1860751 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.61135ms
	I0127 14:20:24.080007 1860751 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:20:24.080110 1860751 kubeadm.go:310] [api-check] The API server is healthy after 5.001239504s
	I0127 14:20:24.080256 1860751 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:20:24.080387 1860751 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:20:24.080441 1860751 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:20:24.080637 1860751 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-212529 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:20:24.080711 1860751 kubeadm.go:310] [bootstrap-token] Using token: pxjq5d.hk6ws8nooc0hkr03
	I0127 14:20:24.082018 1860751 out.go:235]   - Configuring RBAC rules ...
	I0127 14:20:24.082176 1860751 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:20:24.082314 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:20:24.082518 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:20:24.082703 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:20:24.082889 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:20:24.083015 1860751 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:20:24.083173 1860751 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:20:24.083250 1860751 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:20:24.083301 1860751 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:20:24.083311 1860751 kubeadm.go:310] 
	I0127 14:20:24.083396 1860751 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:20:24.083407 1860751 kubeadm.go:310] 
	I0127 14:20:24.083513 1860751 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:20:24.083522 1860751 kubeadm.go:310] 
	I0127 14:20:24.083558 1860751 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:20:24.083655 1860751 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:20:24.083734 1860751 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:20:24.083743 1860751 kubeadm.go:310] 
	I0127 14:20:24.083802 1860751 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:20:24.083810 1860751 kubeadm.go:310] 
	I0127 14:20:24.083852 1860751 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:20:24.083858 1860751 kubeadm.go:310] 
	I0127 14:20:24.083921 1860751 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:20:24.084043 1860751 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:20:24.084140 1860751 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:20:24.084149 1860751 kubeadm.go:310] 
	I0127 14:20:24.084263 1860751 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:20:24.084383 1860751 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:20:24.084400 1860751 kubeadm.go:310] 
	I0127 14:20:24.084497 1860751 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
	I0127 14:20:24.084584 1860751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e \
	I0127 14:20:24.084604 1860751 kubeadm.go:310] 	--control-plane 
	I0127 14:20:24.084610 1860751 kubeadm.go:310] 
	I0127 14:20:24.084679 1860751 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:20:24.084685 1860751 kubeadm.go:310] 
	I0127 14:20:24.084750 1860751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
	I0127 14:20:24.084894 1860751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e 
	I0127 14:20:24.084923 1860751 cni.go:84] Creating CNI manager for ""
	I0127 14:20:24.084937 1860751 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:20:24.086257 1860751 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:20:24.087300 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:20:24.097744 1860751 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
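	# Sketch: the 496-byte conflist written above is the bridge CNI configuration that
	# containerd loads from /etc/cni/net.d. It can be read back from the node with standard
	# minikube CLI usage:
	minikube -p default-k8s-diff-port-212529 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist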
	I0127 14:20:24.115867 1860751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:20:24.115958 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:24.115962 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-212529 minikube.k8s.io/updated_at=2025_01_27T14_20_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=default-k8s-diff-port-212529 minikube.k8s.io/primary=true
	I0127 14:20:24.324045 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:24.324042 1860751 ops.go:34] apiserver oom_adj: -16
	I0127 14:20:24.824528 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:25.324196 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:25.824971 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:26.324285 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:26.825007 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:27.324812 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:27.824252 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:28.324496 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:28.413845 1860751 kubeadm.go:1113] duration metric: took 4.297974897s to wait for elevateKubeSystemPrivileges
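	# Sketch of the privilege-elevation step timed above: bind cluster-admin to kube-system's
	# default service account (command taken from the log), then poll until the "default"
	# service account is served by the new control plane.
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  create clusterrolebinding minikube-rbac --clusterrole=cluster-admin \
	  --serviceaccount=kube-system:default
	until sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get sa default >/dev/null 2>&1; do sleep 1; done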
	I0127 14:20:28.413890 1860751 kubeadm.go:394] duration metric: took 5m43.681075591s to StartCluster
	I0127 14:20:28.413911 1860751 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:20:28.414029 1860751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:20:28.416135 1860751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:20:28.416434 1860751 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.145 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 14:20:28.416580 1860751 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:20:28.416710 1860751 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416715 1860751 config.go:182] Loaded profile config "default-k8s-diff-port-212529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:20:28.416736 1860751 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.416745 1860751 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:20:28.416742 1860751 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416759 1860751 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416785 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.416797 1860751 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.416807 1860751 addons.go:247] addon dashboard should already be in state true
	I0127 14:20:28.416843 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.417198 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417233 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.417240 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417275 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.416772 1860751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-212529"
	I0127 14:20:28.416777 1860751 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.417322 1860751 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.417337 1860751 addons.go:247] addon metrics-server should already be in state true
	I0127 14:20:28.417560 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.417900 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417916 1860751 out.go:177] * Verifying Kubernetes components...
	I0127 14:20:28.417955 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.417963 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.418005 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.419061 1860751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:20:28.434949 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
	I0127 14:20:28.435505 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.436082 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.436114 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.436521 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.436752 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.437523 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0127 14:20:28.437697 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I0127 14:20:28.438072 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.438417 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.438657 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.438682 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.438906 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.438929 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.439056 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.439281 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.439489 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
	I0127 14:20:28.439624 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.439660 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.439804 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.439846 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.439944 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.440409 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.440432 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.440811 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.441377 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.441420 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.441785 1860751 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.441804 1860751 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:20:28.441836 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.442074 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.442111 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.460558 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I0127 14:20:28.461043 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
	I0127 14:20:28.461200 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.461461 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.461725 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.461749 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.461814 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0127 14:20:28.462061 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.462083 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.462286 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.462330 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.462485 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.462605 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.462762 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.462775 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.462832 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.463228 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.463817 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.463862 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.464659 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.465253 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.466108 1860751 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:20:28.466667 1860751 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:20:28.467300 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:20:28.467316 1860751 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:20:28.467333 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.469055 1860751 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:20:28.469287 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
	I0127 14:20:28.469629 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.470009 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:20:28.470027 1860751 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:20:28.470055 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.470158 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.470180 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.470774 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.470967 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.471164 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.471781 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.471814 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.472153 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.472327 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.472488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.472639 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.473502 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.473853 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.474311 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.474338 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.474488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.474652 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.474805 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.474896 1860751 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:20:28.474964 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.475898 1860751 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:20:28.475916 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:20:28.475933 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.478521 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.478927 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.478950 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.479131 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.479325 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.479479 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.479622 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.482246 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I0127 14:20:28.482637 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.483047 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.483068 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.483409 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.483542 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.484999 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.485241 1860751 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:20:28.485259 1860751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:20:28.485276 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.488061 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.488402 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.488429 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.488581 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.488725 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.488858 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.489030 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.646865 1860751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:20:28.672532 1860751 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-212529" to be "Ready" ...
	I0127 14:20:28.703176 1860751 node_ready.go:49] node "default-k8s-diff-port-212529" has status "Ready":"True"
	I0127 14:20:28.703197 1860751 node_ready.go:38] duration metric: took 30.636379ms for node "default-k8s-diff-port-212529" to be "Ready" ...
	I0127 14:20:28.703206 1860751 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:28.710494 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:28.817820 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:20:28.817849 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:20:28.837871 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:20:28.851072 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:20:28.851107 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:20:28.852529 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:20:28.858946 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:20:28.858978 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:20:28.897376 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:20:28.897409 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:20:28.944458 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:20:28.944489 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:20:28.996770 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:20:28.996799 1860751 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:20:29.041836 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:20:29.066199 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:20:29.066234 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:20:29.191066 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:20:29.191092 1860751 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:20:29.292937 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:20:29.292970 1860751 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:20:29.324574 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:20:29.324605 1860751 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:20:29.381589 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:20:29.381618 1860751 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:20:29.579396 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:20:29.579421 1860751 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:20:29.730806 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:20:30.332634 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.480056609s)
	I0127 14:20:30.332719 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.332740 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.332753 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.494842628s)
	I0127 14:20:30.332799 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.332812 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333060 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333080 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.333120 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.333128 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333246 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333271 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.333280 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.333287 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333331 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
	I0127 14:20:30.333499 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333513 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.335273 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.335291 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.402574 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.402607 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.402929 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.402951 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.597814 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555933063s)
	I0127 14:20:30.597873 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.597890 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.598223 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.598244 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.598254 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.598262 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.598523 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.598545 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.598558 1860751 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-212529"
	I0127 14:20:30.720235 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:31.251992 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.52112686s)
	I0127 14:20:31.252076 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:31.252099 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:31.252456 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:31.252477 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:31.252487 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:31.252495 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:31.252788 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:31.252797 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
	I0127 14:20:31.252810 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:31.254461 1860751 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-212529 addons enable metrics-server
	
	I0127 14:20:31.255681 1860751 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 14:20:31.256922 1860751 addons.go:514] duration metric: took 2.840355251s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
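	
	For reference (not part of the captured run), the same end state can be inspected from the host once the profile is up. A minimal sketch, assuming the profile name from the log above and the kubectl context that minikube creates for the profile:
	
	  $ minikube -p default-k8s-diff-port-212529 addons list
	  $ kubectl --context default-k8s-diff-port-212529 get pods -n kubernetes-dashboard
	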
	I0127 14:20:33.216592 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:35.217244 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:37.731702 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.731733 1860751 pod_ready.go:82] duration metric: took 9.021206919s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.731747 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.761047 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.761074 1860751 pod_ready.go:82] duration metric: took 29.318136ms for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.761084 1860751 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.772463 1860751 pod_ready.go:93] pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.772491 1860751 pod_ready.go:82] duration metric: took 11.399303ms for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.772504 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.780269 1860751 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.780294 1860751 pod_ready.go:82] duration metric: took 7.782307ms for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.780306 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.785276 1860751 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.785304 1860751 pod_ready.go:82] duration metric: took 4.986421ms for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.785315 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.114939 1860751 pod_ready.go:93] pod "kube-proxy-f5fmd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:38.114969 1860751 pod_ready.go:82] duration metric: took 329.644964ms for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.114981 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.515806 1860751 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:38.515832 1860751 pod_ready.go:82] duration metric: took 400.844808ms for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.515841 1860751 pod_ready.go:39] duration metric: took 9.812625577s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:38.515859 1860751 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:20:38.515918 1860751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:20:38.534333 1860751 api_server.go:72] duration metric: took 10.117851719s to wait for apiserver process to appear ...
	I0127 14:20:38.534364 1860751 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:20:38.534390 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:20:38.540410 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 200:
	ok
	I0127 14:20:38.541651 1860751 api_server.go:141] control plane version: v1.32.1
	I0127 14:20:38.541674 1860751 api_server.go:131] duration metric: took 7.30288ms to wait for apiserver health ...
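	
	The probe above hits https://192.168.50.145:8444/healthz directly with the cluster's client certificates. Roughly the same check can be made through the kubeconfig; this is a hedged sketch, not what the test itself runs:
	
	  $ kubectl --context default-k8s-diff-port-212529 get --raw='/healthz'          # prints "ok", matching the 200 above
	  $ kubectl --context default-k8s-diff-port-212529 get --raw='/readyz?verbose'
	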
	I0127 14:20:38.541685 1860751 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:20:38.725366 1860751 system_pods.go:59] 9 kube-system pods found
	I0127 14:20:38.725397 1860751 system_pods.go:61] "coredns-668d6bf9bc-g77l4" [4457b047-3339-455e-ab06-15a1e4d7a95f] Running
	I0127 14:20:38.725402 1860751 system_pods.go:61] "coredns-668d6bf9bc-gwfcp" [d557581e-b74a-482d-9c8c-12e1b51d11d5] Running
	I0127 14:20:38.725406 1860751 system_pods.go:61] "etcd-default-k8s-diff-port-212529" [1e347129-845b-4c34-831c-e056cccc90f7] Running
	I0127 14:20:38.725410 1860751 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-212529" [1472d317-bd0d-4957-a955-d69eb5339d2a] Running
	I0127 14:20:38.725414 1860751 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-212529" [0e5e7440-7389-4bc8-9ee5-0e8041edef25] Running
	I0127 14:20:38.725417 1860751 system_pods.go:61] "kube-proxy-f5fmd" [a08f6d90-467b-4972-8c03-d62d07e108e5] Running
	I0127 14:20:38.725422 1860751 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-212529" [34188644-73d6-4567-856a-895cef0abac8] Running
	I0127 14:20:38.725431 1860751 system_pods.go:61] "metrics-server-f79f97bbb-gpkgd" [ec65f4da-1a84-4dab-9969-3ed09e9fdce2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:20:38.725436 1860751 system_pods.go:61] "storage-provisioner" [72ed4f2a-f894-4246-8596-b02befc5fde4] Running
	I0127 14:20:38.725448 1860751 system_pods.go:74] duration metric: took 183.756587ms to wait for pod list to return data ...
	I0127 14:20:38.725461 1860751 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:20:38.916064 1860751 default_sa.go:45] found service account: "default"
	I0127 14:20:38.916100 1860751 default_sa.go:55] duration metric: took 190.628425ms for default service account to be created ...
	I0127 14:20:38.916114 1860751 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:20:39.121453 1860751 system_pods.go:87] 9 kube-system pods found
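	
	The checker is walking the same set of pods a plain listing against the cluster would show; reproducing this step by hand would look roughly like the following (sketch only):
	
	  $ kubectl --context default-k8s-diff-port-212529 -n kube-system get pods -o wide
	  $ kubectl --context default-k8s-diff-port-212529 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s
	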
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	4e12a41db7090       523cad1a4df73       24 seconds ago      Exited              dashboard-metrics-scraper   9                   b7ea4c9b57361       dashboard-metrics-scraper-86c6bf9756-gn6tj
	e89316ee54115       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   e5916a311dfe2       kubernetes-dashboard-7779f9b69b-9vnfn
	8564a8569f15d       c69fa2e9cbf5f       22 minutes ago      Running             coredns                     0                   ee19c1b73f8b7       coredns-668d6bf9bc-vn9c5
	02d05ad52d05c       c69fa2e9cbf5f       22 minutes ago      Running             coredns                     0                   a74ecf0c41ddc       coredns-668d6bf9bc-52k8k
	2679dfaab79eb       6e38f40d628db       22 minutes ago      Running             storage-provisioner         0                   75280b90129b5       storage-provisioner
	4f8f8d72b2d07       e29f9c7391fd9       22 minutes ago      Running             kube-proxy                  0                   0d4c8744a3479       kube-proxy-k2hsk
	c27254f84098d       a9e7e6b294baf       22 minutes ago      Running             etcd                        2                   ecf7616195575       etcd-embed-certs-635679
	a8744e9c18072       2b0d6572d062c       22 minutes ago      Running             kube-scheduler              2                   d39608420fc3a       kube-scheduler-embed-certs-635679
	07166050ad18d       95c0bda56fc4d       22 minutes ago      Running             kube-apiserver              2                   39b18375d7684       kube-apiserver-embed-certs-635679
	31762ba9c2652       019ee182b58e2       22 minutes ago      Running             kube-controller-manager     2                   2df21f429ac69       kube-controller-manager-embed-certs-635679
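	
	The dashboard-metrics-scraper container is on attempt 9 and already Exited, i.e. it is crash-looping on the embed-certs node. If this were being debugged by hand, the usual next steps would be along these lines (container ID and pod name taken from the table above; this is a sketch, not part of the run):
	
	  $ kubectl --context embed-certs-635679 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-86c6bf9756-gn6tj
	  $ minikube -p embed-certs-635679 ssh            # then, on the node:
	  $ sudo crictl ps -a --name dashboard-metrics-scraper
	  $ sudo crictl logs 4e12a41db7090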
	
	
	==> containerd <==
	Jan 27 14:34:23 embed-certs-635679 containerd[557]: time="2025-01-27T14:34:23.433157398Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 14:34:23 embed-certs-635679 containerd[557]: time="2025-01-27T14:34:23.435132565Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 14:34:23 embed-certs-635679 containerd[557]: time="2025-01-27T14:34:23.435212103Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.430000400Z" level=info msg="CreateContainer within sandbox \"b7ea4c9b573618c10d13f34c3bae414e76a4629c47c0758826bf0242f75b3024\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.452965831Z" level=info msg="CreateContainer within sandbox \"b7ea4c9b573618c10d13f34c3bae414e76a4629c47c0758826bf0242f75b3024\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\""
	Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.455361289Z" level=info msg="StartContainer for \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\""
	Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.540154441Z" level=info msg="StartContainer for \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\" returns successfully"
	Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.577914291Z" level=info msg="shim disconnected" id=0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5 namespace=k8s.io
	Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.578044697Z" level=warning msg="cleaning up after shim disconnected" id=0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5 namespace=k8s.io
	Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.578083575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.931304002Z" level=info msg="RemoveContainer for \"bdbc010c8f3f227025ff006fd718ab05bf4d2719e86be8eb09db45e97b58a869\""
	Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.936656407Z" level=info msg="RemoveContainer for \"bdbc010c8f3f227025ff006fd718ab05bf4d2719e86be8eb09db45e97b58a869\" returns successfully"
	Jan 27 14:39:28 embed-certs-635679 containerd[557]: time="2025-01-27T14:39:28.428932890Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 14:39:28 embed-certs-635679 containerd[557]: time="2025-01-27T14:39:28.438738645Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 14:39:28 embed-certs-635679 containerd[557]: time="2025-01-27T14:39:28.440721799Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 14:39:28 embed-certs-635679 containerd[557]: time="2025-01-27T14:39:28.440843069Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.431890104Z" level=info msg="CreateContainer within sandbox \"b7ea4c9b573618c10d13f34c3bae414e76a4629c47c0758826bf0242f75b3024\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.455806626Z" level=info msg="CreateContainer within sandbox \"b7ea4c9b573618c10d13f34c3bae414e76a4629c47c0758826bf0242f75b3024\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a\""
	Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.456950499Z" level=info msg="StartContainer for \"4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a\""
	Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.553014507Z" level=info msg="StartContainer for \"4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a\" returns successfully"
	Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.591937618Z" level=info msg="shim disconnected" id=4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a namespace=k8s.io
	Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.592111341Z" level=warning msg="cleaning up after shim disconnected" id=4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a namespace=k8s.io
	Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.592356575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.635612210Z" level=info msg="RemoveContainer for \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\""
	Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.650069374Z" level=info msg="RemoveContainer for \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\" returns successfully"
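	
	The image that keeps failing to pull here appears to be the intentionally unreachable metrics-server image used by this suite: fake.domain never resolves, so that pod can never become ready, which would also explain the metrics APIService errors further down. The failure reproduces directly against containerd on the node; a sketch:
	
	  $ minikube -p embed-certs-635679 ssh "sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4"
	  # expected to fail with: dial tcp: lookup fake.domain: no such host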
	
	
	==> coredns [02d05ad52d05c752b9f96e3e4a9586474fabc31fe8aa2f02fa2e8320c6726089] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8564a8569f15d671ca3ca1e9ad223e5c79149b078c634392de765621ba53192e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-635679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-635679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d
	                    minikube.k8s.io/name=embed-certs-635679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_18_19_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:18:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-635679
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:40:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:39:13 +0000   Mon, 27 Jan 2025 14:18:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:39:13 +0000   Mon, 27 Jan 2025 14:18:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:39:13 +0000   Mon, 27 Jan 2025 14:18:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:39:13 +0000   Mon, 27 Jan 2025 14:18:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.180
	  Hostname:    embed-certs-635679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 059fb273da1b414b9b09f7893653fab6
	  System UUID:                059fb273-da1b-414b-9b09-f7893653fab6
	  Boot ID:                    153d3165-7d8f-4e48-9390-146221d081a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-52k8k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-668d6bf9bc-vn9c5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-635679                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-635679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-635679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-k2hsk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-635679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-7xqnn                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-gn6tj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-9vnfn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-635679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-635679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-635679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-635679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-635679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-635679 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-635679 event: Registered Node embed-certs-635679 in Controller
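	
	Nothing in the node description looks unhealthy: the node is Ready, requests sit at 950m CPU / 440Mi memory against a 2-CPU, ~2Gi VM, and all eleven pods are scheduled. This section is the standard describe output, so it can be regenerated, or narrowed to one node's pods, with something like:
	
	  $ kubectl --context embed-certs-635679 describe node embed-certs-635679
	  $ kubectl --context embed-certs-635679 get pods -A -o wide --field-selector spec.nodeName=embed-certs-635679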
	
	
	==> dmesg <==
	[  +0.052702] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039469] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.832448] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.028315] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.567880] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.329352] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +0.066335] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060068] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
	[  +0.173743] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.129967] systemd-fstab-generator[517]: Ignoring "noauto" option for root device
	[  +0.267080] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +1.043321] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +2.681816] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +0.861542] kauditd_printk_skb: 225 callbacks suppressed
	[  +5.528743] kauditd_printk_skb: 74 callbacks suppressed
	[Jan27 14:14] kauditd_printk_skb: 50 callbacks suppressed
	[Jan27 14:18] systemd-fstab-generator[3013]: Ignoring "noauto" option for root device
	[  +9.071293] systemd-fstab-generator[3380]: Ignoring "noauto" option for root device
	[  +0.097025] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.368821] systemd-fstab-generator[3479]: Ignoring "noauto" option for root device
	[  +0.131454] kauditd_printk_skb: 12 callbacks suppressed
	[  +9.626612] kauditd_printk_skb: 112 callbacks suppressed
	[  +5.388176] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [c27254f84098d782fe3765ecd61ecd61651516518cdbb5be2f10ad3ed25f830d] <==
	{"level":"info","ts":"2025-01-27T14:18:16.664282Z","caller":"traceutil/trace.go:171","msg":"trace[1914227506] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:endpointslice-controller; range_end:; response_count:0; response_revision:192; }","duration":"118.019577ms","start":"2025-01-27T14:18:16.546238Z","end":"2025-01-27T14:18:16.664257Z","steps":["trace[1914227506] 'range keys from in-memory index tree'  (duration: 117.005003ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:18:16.839650Z","caller":"traceutil/trace.go:171","msg":"trace[359175703] transaction","detail":"{read_only:false; response_revision:194; number_of_response:1; }","duration":"104.245905ms","start":"2025-01-27T14:18:16.735388Z","end":"2025-01-27T14:18:16.839633Z","steps":["trace[359175703] 'process raft request'  (duration: 60.823424ms)","trace[359175703] 'compare'  (duration: 43.31828ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:18:17.101002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.858603ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5960677556645753543 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:expand-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:expand-controller\" value_size:655 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T14:18:17.101148Z","caller":"traceutil/trace.go:171","msg":"trace[1496340768] transaction","detail":"{read_only:false; response_revision:195; number_of_response:1; }","duration":"257.045679ms","start":"2025-01-27T14:18:16.844087Z","end":"2025-01-27T14:18:17.101133Z","steps":["trace[1496340768] 'process raft request'  (duration: 116.990703ms)","trace[1496340768] 'compare'  (duration: 139.656308ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:18:17.355387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.281034ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5960677556645753547 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:generic-garbage-collector\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:generic-garbage-collector\" value_size:679 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T14:18:17.356216Z","caller":"traceutil/trace.go:171","msg":"trace[1591653030] transaction","detail":"{read_only:false; response_revision:197; number_of_response:1; }","duration":"188.255255ms","start":"2025-01-27T14:18:17.167946Z","end":"2025-01-27T14:18:17.356202Z","steps":["trace[1591653030] 'process raft request'  (duration: 59.11668ms)","trace[1591653030] 'compare'  (duration: 128.067168ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:18:32.815049Z","caller":"traceutil/trace.go:171","msg":"trace[1609911900] transaction","detail":"{read_only:false; response_revision:516; number_of_response:1; }","duration":"491.055422ms","start":"2025-01-27T14:18:32.323214Z","end":"2025-01-27T14:18:32.814270Z","steps":["trace[1609911900] 'process raft request'  (duration: 490.228832ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:18:32.815117Z","caller":"traceutil/trace.go:171","msg":"trace[1629388984] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"459.925355ms","start":"2025-01-27T14:18:32.354052Z","end":"2025-01-27T14:18:32.813978Z","steps":["trace[1629388984] 'read index received'  (duration: 459.070682ms)","trace[1629388984] 'applied index is now lower than readState.Index'  (duration: 854.047µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:18:32.815220Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"461.154868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-vn9c5\" limit:1 ","response":"range_response_count:1 size:5089"}
	{"level":"info","ts":"2025-01-27T14:18:32.816256Z","caller":"traceutil/trace.go:171","msg":"trace[458862632] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-vn9c5; range_end:; response_count:1; response_revision:516; }","duration":"462.214627ms","start":"2025-01-27T14:18:32.354005Z","end":"2025-01-27T14:18:32.816219Z","steps":["trace[458862632] 'agreement among raft nodes before linearized reading'  (duration: 461.119042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:18:32.816307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:18:32.353989Z","time spent":"462.295735ms","remote":"127.0.0.1:43614","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":5112,"request content":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-vn9c5\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T14:18:32.817683Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:18:32.323188Z","time spent":"492.956313ms","remote":"127.0.0.1:43596","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:515 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-01-27T14:18:37.550999Z","caller":"traceutil/trace.go:171","msg":"trace[960790695] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"118.341864ms","start":"2025-01-27T14:18:37.432645Z","end":"2025-01-27T14:18:37.550987Z","steps":["trace[960790695] 'process raft request'  (duration: 117.893853ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:18:37.550655Z","caller":"traceutil/trace.go:171","msg":"trace[1211411304] linearizableReadLoop","detail":"{readStateIndex:551; appliedIndex:550; }","duration":"109.057389ms","start":"2025-01-27T14:18:37.441580Z","end":"2025-01-27T14:18:37.550638Z","steps":["trace[1211411304] 'read index received'  (duration: 108.895203ms)","trace[1211411304] 'applied index is now lower than readState.Index'  (duration: 161.668µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:18:37.551366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.763911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-f79f97bbb-7xqnn.181e926dcb3ea080\" limit:1 ","response":"range_response_count:1 size:816"}
	{"level":"info","ts":"2025-01-27T14:18:37.551397Z","caller":"traceutil/trace.go:171","msg":"trace[763898449] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-f79f97bbb-7xqnn.181e926dcb3ea080; range_end:; response_count:1; response_revision:535; }","duration":"109.831038ms","start":"2025-01-27T14:18:37.441555Z","end":"2025-01-27T14:18:37.551386Z","steps":["trace[763898449] 'agreement among raft nodes before linearized reading'  (duration: 109.751773ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:28:11.119641Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":862}
	{"level":"info","ts":"2025-01-27T14:28:11.150038Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":862,"took":"28.958378ms","hash":1743789304,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2732032,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2025-01-27T14:28:11.150287Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1743789304,"revision":862,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T14:33:11.126237Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1114}
	{"level":"info","ts":"2025-01-27T14:33:11.130952Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1114,"took":"3.791102ms","hash":2148902225,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1720320,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T14:33:11.131009Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2148902225,"revision":1114,"compact-revision":862}
	{"level":"info","ts":"2025-01-27T14:38:11.132734Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1365}
	{"level":"info","ts":"2025-01-27T14:38:11.136880Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1365,"took":"3.588783ms","hash":1689589143,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1740800,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T14:38:11.136931Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1689589143,"revision":1365,"compact-revision":1114}
	
	
	==> kernel <==
	 14:40:33 up 27 min,  0 users,  load average: 0.54, 0.32, 0.20
	Linux embed-certs-635679 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [07166050ad18d63f7fef1538dc5d308e0c070f26157a049882568876590f1878] <==
	I0127 14:36:14.157532       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:36:14.158713       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 14:38:13.154661       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:38:13.155210       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 14:38:14.157108       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:38:14.157374       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 14:38:14.157589       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:38:14.157743       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 14:38:14.159004       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:38:14.159083       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 14:39:14.159882       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:39:14.160016       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 14:39:14.159887       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:39:14.160146       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 14:39:14.161356       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:39:14.161403       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
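	
	The repeating 503s mean the v1beta1.metrics.k8s.io APIService is registered but its backing service (apparently the metrics-server pod that never pulled its image) is unavailable, so the apiserver cannot fetch its OpenAPI spec. A usual way to confirm that from the host would be:
	
	  $ kubectl --context embed-certs-635679 get apiservice v1beta1.metrics.k8s.io
	  # AVAILABLE expected to be False (e.g. a MissingEndpoints / FailedDiscoveryCheck reason)
	  $ kubectl --context embed-certs-635679 top nodes
	  # expected to fail while the metrics API is unavailable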
	
	
	==> kube-controller-manager [31762ba9c2652717502adc70a3218a8ce2c8cf94ccdacf92cd0e0351fbd946b7] <==
	E0127 14:35:53.011711       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:35:53.068344       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:36:23.017633       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:36:23.076409       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:36:53.023468       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:36:53.084482       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:37:23.029445       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:37:23.091113       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:37:53.036733       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:37:53.099288       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:38:23.043820       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:38:23.108541       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:38:53.050599       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:38:53.115558       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 14:39:13.385113       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-635679"
	E0127 14:39:23.057421       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:39:23.126144       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 14:39:42.444001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="320.114µs"
	E0127 14:39:53.064557       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:39:53.132668       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 14:39:57.444814       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="133.069µs"
	I0127 14:40:08.646485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="129.38µs"
	I0127 14:40:16.043484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="49.421µs"
	E0127 14:40:23.071818       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:40:23.141260       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
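	
	The resource-quota controller and the garbage collector trip over the same root cause as the apiserver section above: discovery of metrics.k8s.io/v1beta1 keeps coming back stale because the aggregated API never becomes healthy. Hitting the aggregated path directly shows the error that discovery sees; a sketch:
	
	  $ kubectl --context embed-certs-635679 get --raw /apis/metrics.k8s.io/v1beta1
	  # typically: Error from server (ServiceUnavailable): the server is currently unable to handle the request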
	
	
	==> kube-proxy [4f8f8d72b2d07e8332023515af728edde6a649254bf14d8c2d86d5bdabe977e8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:18:24.775570       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:18:24.793352       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.180"]
	E0127 14:18:24.793481       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:18:24.871245       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:18:24.871300       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:18:24.871324       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:18:24.873840       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:18:24.874110       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:18:24.874136       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:18:24.875892       1 config.go:199] "Starting service config controller"
	I0127 14:18:24.875939       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:18:24.875976       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:18:24.875981       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:18:24.876667       1 config.go:329] "Starting node config controller"
	I0127 14:18:24.876697       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:18:24.976223       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:18:24.976278       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:18:24.977586       1 shared_informer.go:320] Caches are synced for node config
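	
	The nftables errors at the top of this section are only the startup cleanup pass failing ("Operation not supported" on this kernel); kube-proxy then selects the iptables Proxier and all three informer caches sync, so proxying itself looks healthy. A quick way to confirm the effective mode on the node (sketch):
	
	  $ minikube -p embed-certs-635679 ssh "sudo iptables -t nat -L KUBE-SERVICES | head"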
	
	
	==> kube-scheduler [a8744e9c180727296fd4ba21b613d2a9d24ba24eaa8f0f5e22a78aca756ef1c7] <==
	W0127 14:18:14.514894       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 14:18:14.514927       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:14.545430       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 14:18:14.545480       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:14.611082       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 14:18:14.611185       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:14.620292       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 14:18:14.620512       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 14:18:14.686628       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 14:18:14.686900       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:14.694857       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 14:18:14.695087       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:14.696908       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 14:18:14.696950       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:14.702295       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 14:18:14.702317       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:14.853747       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 14:18:14.854077       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:14.866706       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 14:18:14.866994       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:14.881864       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:18:14.882119       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:16.189117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 14:18:16.189165       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 14:18:17.764462       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:39:28 embed-certs-635679 kubelet[3387]: E0127 14:39:28.440992    3387 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 14:39:28 embed-certs-635679 kubelet[3387]: E0127 14:39:28.441135    3387 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 14:39:28 embed-certs-635679 kubelet[3387]: E0127 14:39:28.441463    3387 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r4q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-7xqnn_kube-system(2fae80e8-5118-461e-b160-d384bf083f0f): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 14:39:28 embed-certs-635679 kubelet[3387]: E0127 14:39:28.442990    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
	Jan 27 14:39:41 embed-certs-635679 kubelet[3387]: I0127 14:39:41.425052    3387 scope.go:117] "RemoveContainer" containerID="0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5"
	Jan 27 14:39:41 embed-certs-635679 kubelet[3387]: E0127 14:39:41.426126    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
	Jan 27 14:39:42 embed-certs-635679 kubelet[3387]: E0127 14:39:42.425918    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
	Jan 27 14:39:55 embed-certs-635679 kubelet[3387]: I0127 14:39:55.425731    3387 scope.go:117] "RemoveContainer" containerID="0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5"
	Jan 27 14:39:55 embed-certs-635679 kubelet[3387]: E0127 14:39:55.425987    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
	Jan 27 14:39:57 embed-certs-635679 kubelet[3387]: E0127 14:39:57.426414    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
	Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: I0127 14:40:08.428129    3387 scope.go:117] "RemoveContainer" containerID="0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5"
	Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: E0127 14:40:08.428844    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
	Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: I0127 14:40:08.627900    3387 scope.go:117] "RemoveContainer" containerID="0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5"
	Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: I0127 14:40:08.628355    3387 scope.go:117] "RemoveContainer" containerID="4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a"
	Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: E0127 14:40:08.628581    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
	Jan 27 14:40:16 embed-certs-635679 kubelet[3387]: I0127 14:40:16.028518    3387 scope.go:117] "RemoveContainer" containerID="4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a"
	Jan 27 14:40:16 embed-certs-635679 kubelet[3387]: E0127 14:40:16.028691    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
	Jan 27 14:40:18 embed-certs-635679 kubelet[3387]: E0127 14:40:18.441960    3387 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 14:40:18 embed-certs-635679 kubelet[3387]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 14:40:18 embed-certs-635679 kubelet[3387]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:40:18 embed-certs-635679 kubelet[3387]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:40:18 embed-certs-635679 kubelet[3387]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:40:22 embed-certs-635679 kubelet[3387]: E0127 14:40:22.426510    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
	Jan 27 14:40:28 embed-certs-635679 kubelet[3387]: I0127 14:40:28.425959    3387 scope.go:117] "RemoveContainer" containerID="4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a"
	Jan 27 14:40:28 embed-certs-635679 kubelet[3387]: E0127 14:40:28.426132    3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
	
	
	==> kubernetes-dashboard [e89316ee54115ff814681a1206060ff283df15367f524eac34bd68ee628d2bf4] <==
	2025/01/27 14:28:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:28:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:29:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:29:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:30:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:30:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:31:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:31:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:32:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:32:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:33:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:33:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:34:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:34:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:35:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:35:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:36:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:36:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:37:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:37:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:38:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:38:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:39:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:39:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:40:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2679dfaab79eb703b9951ce9d7b7994254f7d475f6890c525e36e5fc8a5ee306] <==
	I0127 14:18:26.002205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:18:26.050084       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:18:26.056067       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:18:26.089361       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:18:26.089546       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-635679_b2f2fbb4-6ed8-4dd8-9e94-5065f87dcffe!
	I0127 14:18:26.090621       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a68f20c-7a75-4920-9933-5237c6d16c12", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-635679_b2f2fbb4-6ed8-4dd8-9e94-5065f87dcffe became leader
	I0127 14:18:26.495046       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-635679_b2f2fbb4-6ed8-4dd8-9e94-5065f87dcffe!
	

-- /stdout --
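
The kubelet log above shows the metrics-server pod cycling between ErrImagePull and ImagePullBackOff because its image is pinned to the unresolvable fake.domain registry (the CustomAddonRegistries:map[MetricsServer:fake.domain] override visible in the cluster config dump later in this report). Below is a minimal client-go sketch for spotting that state directly; it is not part of the harness, the kubeconfig path is the one from this report, and the k8s-app=metrics-server label selector is an assumption about the addon's pod labels.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from this report; adjust for a local run.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20327-1798877/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Assumed label for the minikube metrics-server addon pods.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				if st.State.Waiting != nil {
					// In this run the reason alternates between ErrImagePull and
					// ImagePullBackOff, matching the kubelet entries above.
					fmt.Printf("%s/%s: %s: %s\n", p.Name, st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
				}
			}
		}
	}
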
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-635679 -n embed-certs-635679
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-635679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-7xqnn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-635679 describe pod metrics-server-f79f97bbb-7xqnn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-635679 describe pod metrics-server-f79f97bbb-7xqnn: exit status 1 (62.734928ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-7xqnn" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-635679 describe pod metrics-server-f79f97bbb-7xqnn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1633.06s)
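
For reference, the post-mortem above reduces to two kubectl invocations (helpers_test.go:261 and helpers_test.go:277). The Go sketch below re-creates them outside the harness; the context name is the one from this run, kubectl on PATH is assumed, and the helper itself is hypothetical, not harness code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// postMortem lists non-running pods across all namespaces, then describes
	// each one, mirroring the two commands logged above.
	func postMortem(kubeContext string) error {
		list := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running")
		out, err := list.CombinedOutput()
		if err != nil {
			return fmt.Errorf("listing non-running pods: %v\n%s", err, out)
		}
		for _, pod := range strings.Fields(string(out)) {
			// As in the log, describe runs without a namespace flag, so a pod
			// living outside the default namespace reports NotFound here too.
			desc := exec.Command("kubectl", "--context", kubeContext, "describe", "pod", pod)
			descOut, descErr := desc.CombinedOutput()
			fmt.Printf("describe %s (err=%v):\n%s\n", pod, descErr, descOut)
		}
		return nil
	}

	func main() {
		if err := postMortem("embed-certs-635679"); err != nil {
			fmt.Println(err)
		}
	}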

x
+
TestStartStop/group/no-preload/serial/SecondStart (1589.18s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-591346 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 14:13:45.117940 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:50.784026 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:50.790469 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:50.801888 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:50.823315 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:50.864792 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:50.946479 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:51.108248 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:51.430039 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:52.071993 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:53.353716 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:55.916017 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:01.037874 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:11.279757 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:12.646014 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:12.652424 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:12.663789 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:12.685158 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:12.726569 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:12.808048 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:12.970185 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:13.291986 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:13.934333 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:15.215789 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:17.777285 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-591346 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m27.13317116s)

-- stdout --
	* [no-preload-591346] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-591346" primary control-plane node in "no-preload-591346" cluster
	* Restarting existing kvm2 VM for "no-preload-591346" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-591346 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0127 14:13:39.736270 1860441 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:13:39.736445 1860441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:13:39.736460 1860441 out.go:358] Setting ErrFile to fd 2...
	I0127 14:13:39.736467 1860441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:13:39.736774 1860441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 14:13:39.737609 1860441 out.go:352] Setting JSON to false
	I0127 14:13:39.738875 1860441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":39361,"bootTime":1737947859,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:13:39.739002 1860441 start.go:139] virtualization: kvm guest
	I0127 14:13:39.741173 1860441 out.go:177] * [no-preload-591346] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:13:39.742969 1860441 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:13:39.742976 1860441 notify.go:220] Checking for updates...
	I0127 14:13:39.745370 1860441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:13:39.746535 1860441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:13:39.747565 1860441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 14:13:39.748630 1860441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:13:39.749733 1860441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:13:39.751267 1860441 config.go:182] Loaded profile config "no-preload-591346": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:13:39.751746 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:13:39.751813 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:13:39.766904 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42299
	I0127 14:13:39.767373 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:13:39.767996 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:13:39.768020 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:13:39.768370 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:13:39.768587 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:13:39.768843 1860441 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:13:39.769214 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:13:39.769258 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:13:39.783825 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0127 14:13:39.784164 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:13:39.784589 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:13:39.784610 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:13:39.784869 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:13:39.785066 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:13:39.820081 1860441 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:13:39.821391 1860441 start.go:297] selected driver: kvm2
	I0127 14:13:39.821414 1860441 start.go:901] validating driver "kvm2" against &{Name:no-preload-591346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-591346 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:13:39.821575 1860441 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:13:39.822705 1860441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.822831 1860441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:13:39.838709 1860441 install.go:137] /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:13:39.839132 1860441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:13:39.839172 1860441 cni.go:84] Creating CNI manager for ""
	I0127 14:13:39.839227 1860441 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:13:39.839278 1860441 start.go:340] cluster config:
	{Name:no-preload-591346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-591346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-hos
t Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:13:39.839401 1860441 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.841280 1860441 out.go:177] * Starting "no-preload-591346" primary control-plane node in "no-preload-591346" cluster
	I0127 14:13:39.842475 1860441 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:13:39.842601 1860441 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346/config.json ...
	I0127 14:13:39.842726 1860441 cache.go:107] acquiring lock: {Name:mka61000415987aee83a406bd8d4053902c12f76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.842729 1860441 cache.go:107] acquiring lock: {Name:mkbf11fed31baefb26be7ef9e2b997332b03307a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.842775 1860441 cache.go:107] acquiring lock: {Name:mk781cbb36cac27eecace8a6a69490330f8870f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.842721 1860441 cache.go:107] acquiring lock: {Name:mkf19572f0cbc73c7ce7313169747a25259e4658 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.842859 1860441 cache.go:115] /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 14:13:39.842885 1860441 cache.go:115] /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 14:13:39.842889 1860441 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 169.33µs
	I0127 14:13:39.842941 1860441 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 14:13:39.842914 1860441 cache.go:107] acquiring lock: {Name:mk8e6e19324a0f9dff9b6f6bd82a4eb9dd4f2b48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.842946 1860441 cache.go:107] acquiring lock: {Name:mke71a0f88cb847a6019e9cbc67dcc2db6ca6ef3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.842864 1860441 cache.go:107] acquiring lock: {Name:mkd5c28ca690d9569abaf42f6dde22226e49538e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.842847 1860441 start.go:360] acquireMachinesLock for no-preload-591346: {Name:mk6fcac41a7a21b211b65e56994e625852d1a781 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:13:39.842898 1860441 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 183.361µs
	I0127 14:13:39.843030 1860441 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 14:13:39.843001 1860441 cache.go:107] acquiring lock: {Name:mkaa168d6fd954982cf74ba479eb01d8cf527be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:39.843064 1860441 cache.go:115] /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 14:13:39.843071 1860441 cache.go:115] /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 14:13:39.843067 1860441 cache.go:115] /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 14:13:39.843081 1860441 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 183.356µs
	I0127 14:13:39.843084 1860441 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 258.243µs
	I0127 14:13:39.843088 1860441 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 233.516µs
	I0127 14:13:39.843095 1860441 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 14:13:39.843097 1860441 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 14:13:39.843100 1860441 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 14:13:39.842919 1860441 cache.go:115] /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 14:13:39.843119 1860441 cache.go:115] /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 14:13:39.843138 1860441 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 182.087µs
	I0127 14:13:39.843157 1860441 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 14:13:39.843121 1860441 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 411.265µs
	I0127 14:13:39.843168 1860441 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 14:13:39.842923 1860441 cache.go:115] /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 14:13:39.843181 1860441 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 408.095µs
	I0127 14:13:39.843189 1860441 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 14:13:39.843197 1860441 cache.go:87] Successfully saved all images to host disk.
	I0127 14:13:40.299736 1860441 start.go:364] duration metric: took 456.685895ms to acquireMachinesLock for "no-preload-591346"
	I0127 14:13:40.299793 1860441 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:13:40.299803 1860441 fix.go:54] fixHost starting: 
	I0127 14:13:40.300289 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:13:40.300347 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:13:40.317779 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0127 14:13:40.318242 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:13:40.318847 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:13:40.318878 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:13:40.319384 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:13:40.319601 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:13:40.319762 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetState
	I0127 14:13:40.321393 1860441 fix.go:112] recreateIfNeeded on no-preload-591346: state=Stopped err=<nil>
	I0127 14:13:40.321419 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	W0127 14:13:40.321589 1860441 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:13:40.323834 1860441 out.go:177] * Restarting existing kvm2 VM for "no-preload-591346" ...
	I0127 14:13:40.325058 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Start
	I0127 14:13:40.325253 1860441 main.go:141] libmachine: (no-preload-591346) starting domain...
	I0127 14:13:40.325272 1860441 main.go:141] libmachine: (no-preload-591346) ensuring networks are active...
	I0127 14:13:40.326013 1860441 main.go:141] libmachine: (no-preload-591346) Ensuring network default is active
	I0127 14:13:40.326406 1860441 main.go:141] libmachine: (no-preload-591346) Ensuring network mk-no-preload-591346 is active
	I0127 14:13:40.326851 1860441 main.go:141] libmachine: (no-preload-591346) getting domain XML...
	I0127 14:13:40.327690 1860441 main.go:141] libmachine: (no-preload-591346) creating domain...
	I0127 14:13:41.585503 1860441 main.go:141] libmachine: (no-preload-591346) waiting for IP...
	I0127 14:13:41.586492 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:41.587018 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:41.587120 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:41.587010 1860476 retry.go:31] will retry after 235.233838ms: waiting for domain to come up
	I0127 14:13:41.823606 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:41.824268 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:41.824324 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:41.824252 1860476 retry.go:31] will retry after 368.945815ms: waiting for domain to come up
	I0127 14:13:42.195087 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:42.195680 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:42.195710 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:42.195651 1860476 retry.go:31] will retry after 411.3149ms: waiting for domain to come up
	I0127 14:13:42.609116 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:42.609756 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:42.609800 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:42.609677 1860476 retry.go:31] will retry after 456.459551ms: waiting for domain to come up
	I0127 14:13:43.068283 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:43.068852 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:43.068891 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:43.068799 1860476 retry.go:31] will retry after 636.140781ms: waiting for domain to come up
	I0127 14:13:43.706826 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:43.707435 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:43.707467 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:43.707380 1860476 retry.go:31] will retry after 745.192816ms: waiting for domain to come up
	I0127 14:13:44.454361 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:44.454889 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:44.454917 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:44.454852 1860476 retry.go:31] will retry after 1.00860738s: waiting for domain to come up
	I0127 14:13:45.465521 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:45.466141 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:45.466174 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:45.466102 1860476 retry.go:31] will retry after 1.461377204s: waiting for domain to come up
	I0127 14:13:46.928945 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:46.929487 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:46.929517 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:46.929453 1860476 retry.go:31] will retry after 1.650417602s: waiting for domain to come up
	I0127 14:13:48.581120 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:48.581619 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:48.581648 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:48.581578 1860476 retry.go:31] will retry after 1.87923949s: waiting for domain to come up
	I0127 14:13:50.463232 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:50.463924 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:50.463960 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:50.463870 1860476 retry.go:31] will retry after 1.913972431s: waiting for domain to come up
	I0127 14:13:52.379932 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:52.380324 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:52.380366 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:52.380308 1860476 retry.go:31] will retry after 2.540578213s: waiting for domain to come up
	I0127 14:13:54.923833 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:54.924269 1860441 main.go:141] libmachine: (no-preload-591346) DBG | unable to find current IP address of domain no-preload-591346 in network mk-no-preload-591346
	I0127 14:13:54.924300 1860441 main.go:141] libmachine: (no-preload-591346) DBG | I0127 14:13:54.924226 1860476 retry.go:31] will retry after 2.955316717s: waiting for domain to come up
	I0127 14:13:57.882611 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:57.883047 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has current primary IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:57.883066 1860441 main.go:141] libmachine: (no-preload-591346) found domain IP: 192.168.39.238
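
The retries above show libmachine polling libvirt for the guest's DHCP lease, growing the delay between attempts, before SSH provisioning can begin. A minimal Go sketch of that wait-for-IP backoff pattern, with lookupIP as a hypothetical stand-in for the lease query, could look like this:

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases.
func lookupIP() (string, error) {
	return "", errors.New("no lease for domain yet")
}

// waitForIP polls until an IP appears, growing the delay between attempts,
// roughly like the increasing "will retry after ..." intervals in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for domain IP")
}

func main() {
	ip, err := waitForIP(5 * time.Second)
	fmt.Println(ip, err)
}
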
	I0127 14:13:57.883079 1860441 main.go:141] libmachine: (no-preload-591346) reserving static IP address...
	I0127 14:13:57.883498 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "no-preload-591346", mac: "52:54:00:50:46:74", ip: "192.168.39.238"} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:57.883539 1860441 main.go:141] libmachine: (no-preload-591346) DBG | skip adding static IP to network mk-no-preload-591346 - found existing host DHCP lease matching {name: "no-preload-591346", mac: "52:54:00:50:46:74", ip: "192.168.39.238"}
	I0127 14:13:57.883551 1860441 main.go:141] libmachine: (no-preload-591346) reserved static IP address 192.168.39.238 for domain no-preload-591346
	I0127 14:13:57.883565 1860441 main.go:141] libmachine: (no-preload-591346) waiting for SSH...
	I0127 14:13:57.883579 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Getting to WaitForSSH function...
	I0127 14:13:57.885706 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:57.886024 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:57.886059 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:57.886192 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Using SSH client type: external
	I0127 14:13:57.886216 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa (-rw-------)
	I0127 14:13:57.886266 1860441 main.go:141] libmachine: (no-preload-591346) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:13:57.886289 1860441 main.go:141] libmachine: (no-preload-591346) DBG | About to run SSH command:
	I0127 14:13:57.886318 1860441 main.go:141] libmachine: (no-preload-591346) DBG | exit 0
	I0127 14:13:58.010411 1860441 main.go:141] libmachine: (no-preload-591346) DBG | SSH cmd err, output: <nil>: 
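
With the IP known, provisioning waits for SSH by shelling out to the system ssh binary with the non-interactive options listed above and running "exit 0" until it succeeds. A hedged sketch of that probe using os/exec (the address and key path below are placeholders, not a real setup):

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH shells out to the system ssh binary with non-interactive options
// and runs "exit 0", the same shape as the WaitForSSH probe in the log.
func probeSSH(addr, keyPath string) error {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run()
}

func main() {
	err := probeSSH("192.168.39.238", "/path/to/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}
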
	I0127 14:13:58.010884 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetConfigRaw
	I0127 14:13:58.011634 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetIP
	I0127 14:13:58.014279 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.014610 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.014641 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.014877 1860441 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346/config.json ...
	I0127 14:13:58.015083 1860441 machine.go:93] provisionDockerMachine start ...
	I0127 14:13:58.015103 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:13:58.015365 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:13:58.017589 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.017953 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.017989 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.018085 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:13:58.018280 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.018445 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.018591 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:13:58.018778 1860441 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:58.019008 1860441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0127 14:13:58.019018 1860441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:13:58.126714 1860441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 14:13:58.126774 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetMachineName
	I0127 14:13:58.127032 1860441 buildroot.go:166] provisioning hostname "no-preload-591346"
	I0127 14:13:58.127064 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetMachineName
	I0127 14:13:58.127287 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:13:58.130061 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.130416 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.130436 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.130585 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:13:58.130794 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.130969 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.131090 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:13:58.131241 1860441 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:58.131425 1860441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0127 14:13:58.131436 1860441 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-591346 && echo "no-preload-591346" | sudo tee /etc/hostname
	I0127 14:13:58.252185 1860441 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-591346
	
	I0127 14:13:58.252219 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:13:58.255243 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.255703 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.255742 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.255954 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:13:58.256137 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.256273 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.256373 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:13:58.256580 1860441 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:58.256778 1860441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0127 14:13:58.256793 1860441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-591346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-591346/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-591346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:13:58.373522 1860441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:13:58.373574 1860441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-1798877/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-1798877/.minikube}
	I0127 14:13:58.373603 1860441 buildroot.go:174] setting up certificates
	I0127 14:13:58.373628 1860441 provision.go:84] configureAuth start
	I0127 14:13:58.373642 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetMachineName
	I0127 14:13:58.373908 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetIP
	I0127 14:13:58.376960 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.377356 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.377382 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.377580 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:13:58.380041 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.380426 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.380465 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.380569 1860441 provision.go:143] copyHostCerts
	I0127 14:13:58.380634 1860441 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem, removing ...
	I0127 14:13:58.380658 1860441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem
	I0127 14:13:58.380727 1860441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem (1078 bytes)
	I0127 14:13:58.380876 1860441 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem, removing ...
	I0127 14:13:58.380891 1860441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem
	I0127 14:13:58.380923 1860441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem (1123 bytes)
	I0127 14:13:58.381027 1860441 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem, removing ...
	I0127 14:13:58.381037 1860441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem
	I0127 14:13:58.381074 1860441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem (1675 bytes)
	I0127 14:13:58.381164 1860441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem org=jenkins.no-preload-591346 san=[127.0.0.1 192.168.39.238 localhost minikube no-preload-591346]
	I0127 14:13:58.522019 1860441 provision.go:177] copyRemoteCerts
	I0127 14:13:58.522073 1860441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:13:58.522100 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:13:58.524877 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.525292 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.525328 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.525622 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:13:58.525836 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.526027 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:13:58.526201 1860441 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa Username:docker}
	I0127 14:13:58.609473 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:13:58.633327 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 14:13:58.656780 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:13:58.680610 1860441 provision.go:87] duration metric: took 306.963475ms to configureAuth
	I0127 14:13:58.680643 1860441 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:13:58.680820 1860441 config.go:182] Loaded profile config "no-preload-591346": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:13:58.680833 1860441 machine.go:96] duration metric: took 665.738127ms to provisionDockerMachine
	I0127 14:13:58.680841 1860441 start.go:293] postStartSetup for "no-preload-591346" (driver="kvm2")
	I0127 14:13:58.680870 1860441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:13:58.680899 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:13:58.681251 1860441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:13:58.681285 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:13:58.684301 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.684794 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.684817 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.685008 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:13:58.685182 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.685308 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:13:58.685412 1860441 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa Username:docker}
	I0127 14:13:58.769290 1860441 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:13:58.773122 1860441 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:13:58.773150 1860441 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/addons for local assets ...
	I0127 14:13:58.773211 1860441 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/files for local assets ...
	I0127 14:13:58.773289 1860441 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem -> 18060702.pem in /etc/ssl/certs
	I0127 14:13:58.773382 1860441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:13:58.782713 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:13:58.804175 1860441 start.go:296] duration metric: took 123.316183ms for postStartSetup
	I0127 14:13:58.804228 1860441 fix.go:56] duration metric: took 18.504426284s for fixHost
	I0127 14:13:58.804257 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:13:58.807191 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.807615 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.807665 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.807761 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:13:58.807976 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.808126 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.808301 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:13:58.808466 1860441 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:58.808705 1860441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0127 14:13:58.808719 1860441 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:13:58.915180 1860441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987238.874533334
	
	I0127 14:13:58.915207 1860441 fix.go:216] guest clock: 1737987238.874533334
	I0127 14:13:58.915215 1860441 fix.go:229] Guest: 2025-01-27 14:13:58.874533334 +0000 UTC Remote: 2025-01-27 14:13:58.804233904 +0000 UTC m=+19.108828436 (delta=70.29943ms)
	I0127 14:13:58.915236 1860441 fix.go:200] guest clock delta is within tolerance: 70.29943ms
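
The fix step above compares the guest clock against the host clock and accepts the 70.29943ms difference because it is under the tolerance. A small illustrative Go check of the same arithmetic, with an assumed skew and an assumed tolerance value:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute guest/host clock difference and whether
// it falls within the given tolerance, echoing the guest-clock check above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(70 * time.Millisecond) // assumed skew, similar to the ~70ms seen above
	d, ok := clockDeltaOK(guest, host, 2*time.Second) // tolerance value is an assumption
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}
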
	I0127 14:13:58.915242 1860441 start.go:83] releasing machines lock for "no-preload-591346", held for 18.615473038s
	I0127 14:13:58.915263 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:13:58.915556 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetIP
	I0127 14:13:58.918245 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.918634 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.918657 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.918866 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:13:58.919443 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:13:58.919652 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:13:58.919771 1860441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:13:58.919824 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:13:58.919868 1860441 ssh_runner.go:195] Run: cat /version.json
	I0127 14:13:58.919892 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:13:58.922757 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.923071 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.923211 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.923239 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.923461 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:13:58.923595 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:13:58.923632 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:13:58.923667 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.923748 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:13:58.923834 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:13:58.923890 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:13:58.923948 1860441 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa Username:docker}
	I0127 14:13:58.924028 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:13:58.924179 1860441 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa Username:docker}
	I0127 14:13:58.999264 1860441 ssh_runner.go:195] Run: systemctl --version
	I0127 14:13:59.031329 1860441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:13:59.036832 1860441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:13:59.036902 1860441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:13:59.052655 1860441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:13:59.052678 1860441 start.go:495] detecting cgroup driver to use...
	I0127 14:13:59.052735 1860441 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 14:13:59.085051 1860441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 14:13:59.097521 1860441 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:13:59.097604 1860441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:13:59.110507 1860441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:13:59.122707 1860441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:13:59.229001 1860441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:13:59.356294 1860441 docker.go:233] disabling docker service ...
	I0127 14:13:59.356391 1860441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:13:59.369974 1860441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:13:59.381763 1860441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:13:59.511609 1860441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:13:59.616552 1860441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:13:59.629610 1860441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:13:59.646081 1860441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 14:13:59.655555 1860441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 14:13:59.665778 1860441 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 14:13:59.665871 1860441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 14:13:59.676615 1860441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:13:59.686419 1860441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 14:13:59.696354 1860441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:13:59.706101 1860441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:13:59.715813 1860441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 14:13:59.725384 1860441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 14:13:59.735083 1860441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
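
The sequence of sed edits above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver, the pause:3.10 sandbox image, and the runc v2 runtime. A rough Go sketch of one of those in-place edits (the SystemdCgroup toggle), operating on an in-memory string purely for illustration:

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup rewrites the SystemdCgroup line in a containerd config.toml
// body, mirroring the sed -r 's|^( *)SystemdCgroup = .*$|...|' step above.
func setSystemdCgroup(config string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	fmt.Print(setSystemdCgroup(in, false))
}
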
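The restart that follows (daemon-reload, restart containerd, then waiting for /run/containerd/containerd.sock) is what makes these file edits take effect.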
	I0127 14:13:59.744967 1860441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:13:59.753663 1860441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:13:59.753713 1860441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:13:59.765238 1860441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:13:59.774598 1860441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:13:59.876180 1860441 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 14:13:59.904983 1860441 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 14:13:59.905083 1860441 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:13:59.909749 1860441 retry.go:31] will retry after 673.835025ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 14:14:00.583903 1860441 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:14:00.589138 1860441 start.go:563] Will wait 60s for crictl version
	I0127 14:14:00.589195 1860441 ssh_runner.go:195] Run: which crictl
	I0127 14:14:00.593021 1860441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:14:00.629756 1860441 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 14:14:00.629832 1860441 ssh_runner.go:195] Run: containerd --version
	I0127 14:14:00.657434 1860441 ssh_runner.go:195] Run: containerd --version
	I0127 14:14:00.683125 1860441 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 14:14:00.684539 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetIP
	I0127 14:14:00.687165 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:14:00.687466 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:14:00.687489 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:14:00.687712 1860441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 14:14:00.691759 1860441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:14:00.703633 1860441 kubeadm.go:883] updating cluster {Name:no-preload-591346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-591346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:14:00.703789 1860441 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:14:00.703831 1860441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:14:00.734097 1860441 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:14:00.734129 1860441 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:14:00.734141 1860441 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.32.1 containerd true true} ...
	I0127 14:14:00.734262 1860441 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-591346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-591346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:14:00.734339 1860441 ssh_runner.go:195] Run: sudo crictl info
	I0127 14:14:00.765869 1860441 cni.go:84] Creating CNI manager for ""
	I0127 14:14:00.765895 1860441 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:14:00.765907 1860441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:14:00.765930 1860441 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-591346 NodeName:no-preload-591346 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:14:00.766048 1860441 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-591346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.238"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:14:00.766113 1860441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:14:00.776084 1860441 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:14:00.776187 1860441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:14:00.785338 1860441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0127 14:14:00.800905 1860441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:14:00.815973 1860441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
	I0127 14:14:00.831466 1860441 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0127 14:14:00.834970 1860441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:14:00.846412 1860441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:14:00.961137 1860441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:14:00.979007 1860441 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346 for IP: 192.168.39.238
	I0127 14:14:00.979035 1860441 certs.go:194] generating shared ca certs ...
	I0127 14:14:00.979057 1860441 certs.go:226] acquiring lock for ca certs: {Name:mkc6b95fb3d2c0d0c7049cde446028a0d731f231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:14:00.979284 1860441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key
	I0127 14:14:00.979344 1860441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key
	I0127 14:14:00.979357 1860441 certs.go:256] generating profile certs ...
	I0127 14:14:00.979441 1860441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346/client.key
	I0127 14:14:00.979516 1860441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346/apiserver.key.2cfbf245
	I0127 14:14:00.979557 1860441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346/proxy-client.key
	I0127 14:14:00.979707 1860441 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem (1338 bytes)
	W0127 14:14:00.979763 1860441 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070_empty.pem, impossibly tiny 0 bytes
	I0127 14:14:00.979777 1860441 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:14:00.979804 1860441 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:14:00.979829 1860441 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:14:00.979861 1860441 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem (1675 bytes)
	I0127 14:14:00.979922 1860441 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:14:00.980866 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:14:01.021421 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:14:01.048719 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:14:01.081844 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 14:14:01.106534 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 14:14:01.128098 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:14:01.149631 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:14:01.173906 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/no-preload-591346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 14:14:01.196982 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /usr/share/ca-certificates/18060702.pem (1708 bytes)
	I0127 14:14:01.218464 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:14:01.240056 1860441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem --> /usr/share/ca-certificates/1806070.pem (1338 bytes)
	I0127 14:14:01.260941 1860441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:14:01.277728 1860441 ssh_runner.go:195] Run: openssl version
	I0127 14:14:01.283482 1860441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18060702.pem && ln -fs /usr/share/ca-certificates/18060702.pem /etc/ssl/certs/18060702.pem"
	I0127 14:14:01.294327 1860441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18060702.pem
	I0127 14:14:01.298511 1860441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:10 /usr/share/ca-certificates/18060702.pem
	I0127 14:14:01.298567 1860441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18060702.pem
	I0127 14:14:01.304109 1860441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18060702.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:14:01.314260 1860441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:14:01.323711 1860441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:14:01.327726 1860441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:14:01.327785 1860441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:14:01.332923 1860441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:14:01.343259 1860441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806070.pem && ln -fs /usr/share/ca-certificates/1806070.pem /etc/ssl/certs/1806070.pem"
	I0127 14:14:01.353285 1860441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806070.pem
	I0127 14:14:01.357171 1860441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:10 /usr/share/ca-certificates/1806070.pem
	I0127 14:14:01.357225 1860441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806070.pem
	I0127 14:14:01.362258 1860441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806070.pem /etc/ssl/certs/51391683.0"
	I0127 14:14:01.372452 1860441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:14:01.376486 1860441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:14:01.381782 1860441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:14:01.386954 1860441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:14:01.392085 1860441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:14:01.397310 1860441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:14:01.402420 1860441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
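
Each openssl invocation above is x509 -checkend 86400, i.e. a check that the certificate stays valid for at least another 24 hours. An equivalent, self-contained Go sketch using crypto/x509 (the path in main is simply the first cert checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd mimics `openssl x509 -checkend 86400`: it returns true when the
// PEM certificate at path remains valid for at least the given duration.
func checkEnd(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("valid for 24h:", ok, err)
}
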
	I0127 14:14:01.407833 1860441 kubeadm.go:392] StartCluster: {Name:no-preload-591346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-591346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:14:01.407934 1860441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 14:14:01.407987 1860441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:14:01.448357 1860441 cri.go:89] found id: "3807f7013e22832f99098e29a6bd50fc57a5afce787f42ec7f5951b4dc89d8f5"
	I0127 14:14:01.448390 1860441 cri.go:89] found id: "15e4a3ba92d0daede11574cd878c443f986dcd683de6bd942e5fe6d7e00e0e78"
	I0127 14:14:01.448396 1860441 cri.go:89] found id: "2338d0a40cb683e961dab1f1b81a6400ff04246dd5d1ed87acf9592b0d142042"
	I0127 14:14:01.448400 1860441 cri.go:89] found id: "1ad6650748362ab04b704debca699d05d7a4cf504d749690d837e04478f1fa2a"
	I0127 14:14:01.448405 1860441 cri.go:89] found id: "132a5effd50674f7b759d73e90d05a670c3716d6060793a5075ce888b2cc0706"
	I0127 14:14:01.448408 1860441 cri.go:89] found id: "3753e8b8035cb25367c90a710905ecbe94f46cdd389d8362ae0737dd5e5d81bc"
	I0127 14:14:01.448412 1860441 cri.go:89] found id: "d3cea675c42f83828be0ba2ba61de135d88887197576bd85e29490feedcc33b3"
	I0127 14:14:01.448416 1860441 cri.go:89] found id: ""
	I0127 14:14:01.448476 1860441 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 14:14:01.462282 1860441 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T14:14:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 14:14:01.462356 1860441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:14:01.471941 1860441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 14:14:01.471970 1860441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 14:14:01.472015 1860441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 14:14:01.481092 1860441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:14:01.481877 1860441 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-591346" does not appear in /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:14:01.482282 1860441 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-1798877/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-591346" cluster setting kubeconfig missing "no-preload-591346" context setting]
	I0127 14:14:01.483005 1860441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:14:01.484465 1860441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 14:14:01.494059 1860441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0127 14:14:01.494093 1860441 kubeadm.go:1160] stopping kube-system containers ...
	I0127 14:14:01.494127 1860441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 14:14:01.494208 1860441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:14:01.532764 1860441 cri.go:89] found id: "3807f7013e22832f99098e29a6bd50fc57a5afce787f42ec7f5951b4dc89d8f5"
	I0127 14:14:01.532787 1860441 cri.go:89] found id: "15e4a3ba92d0daede11574cd878c443f986dcd683de6bd942e5fe6d7e00e0e78"
	I0127 14:14:01.532791 1860441 cri.go:89] found id: "2338d0a40cb683e961dab1f1b81a6400ff04246dd5d1ed87acf9592b0d142042"
	I0127 14:14:01.532795 1860441 cri.go:89] found id: "1ad6650748362ab04b704debca699d05d7a4cf504d749690d837e04478f1fa2a"
	I0127 14:14:01.532797 1860441 cri.go:89] found id: "132a5effd50674f7b759d73e90d05a670c3716d6060793a5075ce888b2cc0706"
	I0127 14:14:01.532800 1860441 cri.go:89] found id: "3753e8b8035cb25367c90a710905ecbe94f46cdd389d8362ae0737dd5e5d81bc"
	I0127 14:14:01.532802 1860441 cri.go:89] found id: "d3cea675c42f83828be0ba2ba61de135d88887197576bd85e29490feedcc33b3"
	I0127 14:14:01.532805 1860441 cri.go:89] found id: ""
	I0127 14:14:01.532810 1860441 cri.go:252] Stopping containers: [3807f7013e22832f99098e29a6bd50fc57a5afce787f42ec7f5951b4dc89d8f5 15e4a3ba92d0daede11574cd878c443f986dcd683de6bd942e5fe6d7e00e0e78 2338d0a40cb683e961dab1f1b81a6400ff04246dd5d1ed87acf9592b0d142042 1ad6650748362ab04b704debca699d05d7a4cf504d749690d837e04478f1fa2a 132a5effd50674f7b759d73e90d05a670c3716d6060793a5075ce888b2cc0706 3753e8b8035cb25367c90a710905ecbe94f46cdd389d8362ae0737dd5e5d81bc d3cea675c42f83828be0ba2ba61de135d88887197576bd85e29490feedcc33b3]
	I0127 14:14:01.532904 1860441 ssh_runner.go:195] Run: which crictl
	I0127 14:14:01.536546 1860441 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 3807f7013e22832f99098e29a6bd50fc57a5afce787f42ec7f5951b4dc89d8f5 15e4a3ba92d0daede11574cd878c443f986dcd683de6bd942e5fe6d7e00e0e78 2338d0a40cb683e961dab1f1b81a6400ff04246dd5d1ed87acf9592b0d142042 1ad6650748362ab04b704debca699d05d7a4cf504d749690d837e04478f1fa2a 132a5effd50674f7b759d73e90d05a670c3716d6060793a5075ce888b2cc0706 3753e8b8035cb25367c90a710905ecbe94f46cdd389d8362ae0737dd5e5d81bc d3cea675c42f83828be0ba2ba61de135d88887197576bd85e29490feedcc33b3
	I0127 14:14:01.567409 1860441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 14:14:01.582679 1860441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:14:01.591615 1860441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:14:01.591629 1860441 kubeadm.go:157] found existing configuration files:
	
	I0127 14:14:01.591671 1860441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:14:01.600444 1860441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:14:01.600498 1860441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:14:01.609277 1860441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:14:01.617752 1860441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:14:01.617799 1860441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:14:01.626258 1860441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:14:01.634398 1860441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:14:01.634451 1860441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:14:01.642812 1860441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:14:01.652133 1860441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:14:01.652187 1860441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:14:01.661273 1860441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:14:01.671220 1860441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:01.791625 1860441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:02.565139 1860441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:02.760865 1860441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:02.842101 1860441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:02.926461 1860441 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:14:02.926545 1860441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:03.426880 1860441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:03.927450 1860441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:03.944012 1860441 api_server.go:72] duration metric: took 1.017547845s to wait for apiserver process to appear ...
	I0127 14:14:03.944047 1860441 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:14:03.944074 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:03.944675 1860441 api_server.go:269] stopped: https://192.168.39.238:8443/healthz: Get "https://192.168.39.238:8443/healthz": dial tcp 192.168.39.238:8443: connect: connection refused
	I0127 14:14:04.444365 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:06.438665 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:14:06.438705 1860441 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:14:06.438724 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:06.466024 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:14:06.466068 1860441 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:14:06.466086 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:06.549228 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:14:06.549259 1860441 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:14:06.944857 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:06.949346 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:14:06.949389 1860441 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:14:07.445148 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:07.449926 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:14:07.449954 1860441 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:14:07.944557 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:07.968736 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:14:07.968782 1860441 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:14:08.444403 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:08.449303 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:14:08.449329 1860441 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:14:08.945055 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:08.950502 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:14:08.950537 1860441 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:14:09.444193 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:09.449021 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:14:09.449048 1860441 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:14:09.944214 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:14:09.948902 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0127 14:14:09.955639 1860441 api_server.go:141] control plane version: v1.32.1
	I0127 14:14:09.955666 1860441 api_server.go:131] duration metric: took 6.011611013s to wait for apiserver health ...
	I0127 14:14:09.955677 1860441 cni.go:84] Creating CNI manager for ""
	I0127 14:14:09.955683 1860441 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:14:09.957359 1860441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:14:09.958826 1860441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:14:09.973360 1860441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:14:09.997788 1860441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:14:10.007305 1860441 system_pods.go:59] 8 kube-system pods found
	I0127 14:14:10.007342 1860441 system_pods.go:61] "coredns-668d6bf9bc-ssktd" [c714dd19-2353-49a0-8082-cc89422ea1c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:14:10.007349 1860441 system_pods.go:61] "etcd-no-preload-591346" [6b304b6d-2ffc-457f-92ba-be6a33487b7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:14:10.007359 1860441 system_pods.go:61] "kube-apiserver-no-preload-591346" [81fda0ae-f7fa-40d0-a92e-4b0c2a850bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:14:10.007365 1860441 system_pods.go:61] "kube-controller-manager-no-preload-591346" [92c2b175-a66c-4469-9360-12d3c89ec857] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:14:10.007375 1860441 system_pods.go:61] "kube-proxy-rgbdm" [6f99797a-6538-42c6-bc7e-c3429d8f7856] Running
	I0127 14:14:10.007384 1860441 system_pods.go:61] "kube-scheduler-no-preload-591346" [46faab74-2729-4a5d-8aa2-71fbf02d2bd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:14:10.007388 1860441 system_pods.go:61] "metrics-server-f79f97bbb-mh2zm" [77f1c4d7-aa38-4029-8f87-60343d4b2167] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:14:10.007394 1860441 system_pods.go:61] "storage-provisioner" [e9ae1a9c-a514-4f7b-96f1-20ebe2195bbd] Running
	I0127 14:14:10.007401 1860441 system_pods.go:74] duration metric: took 9.587327ms to wait for pod list to return data ...
	I0127 14:14:10.007411 1860441 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:14:10.010615 1860441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:14:10.010643 1860441 node_conditions.go:123] node cpu capacity is 2
	I0127 14:14:10.010654 1860441 node_conditions.go:105] duration metric: took 3.23628ms to run NodePressure ...
	I0127 14:14:10.010673 1860441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:10.281697 1860441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 14:14:10.285860 1860441 kubeadm.go:739] kubelet initialised
	I0127 14:14:10.285884 1860441 kubeadm.go:740] duration metric: took 4.152304ms waiting for restarted kubelet to initialise ...
	I0127 14:14:10.285893 1860441 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:14:10.290539 1860441 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-ssktd" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:12.296400 1860441 pod_ready.go:103] pod "coredns-668d6bf9bc-ssktd" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:14.298059 1860441 pod_ready.go:103] pod "coredns-668d6bf9bc-ssktd" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:16.298995 1860441 pod_ready.go:93] pod "coredns-668d6bf9bc-ssktd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:16.299021 1860441 pod_ready.go:82] duration metric: took 6.008453507s for pod "coredns-668d6bf9bc-ssktd" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:16.299034 1860441 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:18.305274 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:20.811285 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:21.304932 1860441 pod_ready.go:93] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:21.304955 1860441 pod_ready.go:82] duration metric: took 5.005913562s for pod "etcd-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:21.304964 1860441 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:21.309651 1860441 pod_ready.go:93] pod "kube-apiserver-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:21.309672 1860441 pod_ready.go:82] duration metric: took 4.701985ms for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:21.309681 1860441 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:22.816049 1860441 pod_ready.go:93] pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:22.816075 1860441 pod_ready.go:82] duration metric: took 1.50638708s for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:22.816086 1860441 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rgbdm" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:22.820559 1860441 pod_ready.go:93] pod "kube-proxy-rgbdm" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:22.820578 1860441 pod_ready.go:82] duration metric: took 4.4863ms for pod "kube-proxy-rgbdm" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:22.820594 1860441 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:22.825065 1860441 pod_ready.go:93] pod "kube-scheduler-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:14:22.825087 1860441 pod_ready.go:82] duration metric: took 4.485927ms for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:22.825101 1860441 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace to be "Ready" ...
	I0127 14:14:24.831068 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:26.832113 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:29.331175 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:31.332355 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:33.831420 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:35.831760 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:38.331252 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:40.331654 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:42.332118 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:44.332596 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:46.832404 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:49.331510 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:51.332173 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:53.831483 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:56.333027 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:58.831385 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:00.831601 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:02.831834 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:05.331615 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:07.333276 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:09.662846 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:11.830861 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:13.830899 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:15.830987 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:17.831386 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:19.832347 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:22.331767 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:24.332560 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:26.333901 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:28.831688 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:30.831934 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:32.832381 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:35.332651 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:37.830660 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:39.833070 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:42.332559 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:44.832204 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:47.331313 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:49.331636 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:51.331696 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:53.331965 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:55.333401 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:57.333447 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:59.831883 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:02.331042 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:04.331191 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:06.331576 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:08.831893 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:11.331238 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:13.830766 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:15.833005 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:18.332614 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:20.831553 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:23.331114 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:25.332784 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:27.832168 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:30.331152 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:32.831448 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:35.332240 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:37.832091 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:39.832342 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:42.331003 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:44.832452 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:47.330974 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:49.830514 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:51.831313 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:53.833004 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:56.332958 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:58.830597 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:00.831180 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:02.831867 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:05.330577 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:07.331792 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:09.332090 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:11.830920 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:14.332440 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:16.831009 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:19.330464 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:21.331062 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:23.332003 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:25.832602 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:28.332372 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:30.831167 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:32.832123 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:35.332384 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:37.832752 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:40.332819 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:42.832607 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:45.333031 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:47.334832 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:49.831455 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:51.831872 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:54.331512 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:56.833114 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:59.331988 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:01.831521 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:03.833692 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:06.331967 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:08.831715 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:10.834416 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:13.333054 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:15.382844 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:17.832213 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:20.332344 1860441 pod_ready.go:103] pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:22.825386 1860441 pod_ready.go:82] duration metric: took 4m0.000264233s for pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace to be "Ready" ...
	E0127 14:18:22.825423 1860441 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-mh2zm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 14:18:22.825451 1860441 pod_ready.go:39] duration metric: took 4m12.539540989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:18:22.825499 1860441 kubeadm.go:597] duration metric: took 4m21.353521014s to restartPrimaryControlPlane
	W0127 14:18:22.825593 1860441 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 14:18:22.825641 1860441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 14:18:24.808780 1860441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.983107553s)
	I0127 14:18:24.808876 1860441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:18:24.826643 1860441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:18:24.839502 1860441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:18:24.851227 1860441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:18:24.851251 1860441 kubeadm.go:157] found existing configuration files:
	
	I0127 14:18:24.851311 1860441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:18:24.863245 1860441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:18:24.863321 1860441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:18:24.874640 1860441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:18:24.884591 1860441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:18:24.884657 1860441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:18:24.893872 1860441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:18:24.906053 1860441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:18:24.906117 1860441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:18:24.922253 1860441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:18:24.934530 1860441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:18:24.934603 1860441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:18:24.946866 1860441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:18:24.996201 1860441 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:18:24.996335 1860441 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:18:25.118357 1860441 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:18:25.118503 1860441 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:18:25.118623 1860441 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:18:25.125938 1860441 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:18:25.128411 1860441 out.go:235]   - Generating certificates and keys ...
	I0127 14:18:25.128537 1860441 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:18:25.128628 1860441 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:18:25.128732 1860441 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:18:25.128818 1860441 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:18:25.128917 1860441 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:18:25.128996 1860441 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:18:25.129087 1860441 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:18:25.129180 1860441 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:18:25.129283 1860441 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:18:25.129390 1860441 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:18:25.129450 1860441 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:18:25.129526 1860441 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:18:25.468885 1860441 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:18:25.753602 1860441 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:18:25.961694 1860441 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:18:26.203876 1860441 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:18:26.654657 1860441 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:18:26.655345 1860441 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:18:26.660075 1860441 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:18:26.661620 1860441 out.go:235]   - Booting up control plane ...
	I0127 14:18:26.661764 1860441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:18:26.664087 1860441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:18:26.665122 1860441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:18:26.688390 1860441 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:18:26.697628 1860441 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:18:26.697715 1860441 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:18:26.888509 1860441 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:18:26.888689 1860441 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:18:27.889991 1860441 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001660219s
	I0127 14:18:27.890067 1860441 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:18:33.391978 1860441 kubeadm.go:310] [api-check] The API server is healthy after 5.502117971s
	I0127 14:18:33.411000 1860441 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:18:33.425260 1860441 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:18:33.454267 1860441 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:18:33.454490 1860441 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-591346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:18:33.472174 1860441 kubeadm.go:310] [bootstrap-token] Using token: u9grdm.ayunyys1k2escuj2
	I0127 14:18:33.473356 1860441 out.go:235]   - Configuring RBAC rules ...
	I0127 14:18:33.473510 1860441 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:18:33.484380 1860441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:18:33.494503 1860441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:18:33.499111 1860441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:18:33.504098 1860441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:18:33.508848 1860441 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:18:33.800569 1860441 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:18:34.265244 1860441 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:18:34.808307 1860441 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:18:34.808330 1860441 kubeadm.go:310] 
	I0127 14:18:34.808402 1860441 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:18:34.808413 1860441 kubeadm.go:310] 
	I0127 14:18:34.808501 1860441 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:18:34.808510 1860441 kubeadm.go:310] 
	I0127 14:18:34.808545 1860441 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:18:34.808625 1860441 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:18:34.808698 1860441 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:18:34.808709 1860441 kubeadm.go:310] 
	I0127 14:18:34.808785 1860441 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:18:34.808799 1860441 kubeadm.go:310] 
	I0127 14:18:34.808840 1860441 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:18:34.808850 1860441 kubeadm.go:310] 
	I0127 14:18:34.808914 1860441 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:18:34.808999 1860441 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:18:34.809067 1860441 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:18:34.809076 1860441 kubeadm.go:310] 
	I0127 14:18:34.809160 1860441 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:18:34.809232 1860441 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:18:34.809237 1860441 kubeadm.go:310] 
	I0127 14:18:34.809318 1860441 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token u9grdm.ayunyys1k2escuj2 \
	I0127 14:18:34.809415 1860441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e \
	I0127 14:18:34.809436 1860441 kubeadm.go:310] 	--control-plane 
	I0127 14:18:34.809441 1860441 kubeadm.go:310] 
	I0127 14:18:34.809521 1860441 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:18:34.809525 1860441 kubeadm.go:310] 
	I0127 14:18:34.809602 1860441 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u9grdm.ayunyys1k2escuj2 \
	I0127 14:18:34.809709 1860441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e 
	I0127 14:18:34.811279 1860441 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:18:34.811487 1860441 cni.go:84] Creating CNI manager for ""
	I0127 14:18:34.811506 1860441 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:18:34.813145 1860441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:18:34.814417 1860441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:18:34.831388 1860441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:18:34.859092 1860441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:18:34.859251 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:34.859331 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-591346 minikube.k8s.io/updated_at=2025_01_27T14_18_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=no-preload-591346 minikube.k8s.io/primary=true
	I0127 14:18:34.894707 1860441 ops.go:34] apiserver oom_adj: -16
	I0127 14:18:35.151233 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:35.652021 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:36.151545 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:36.651580 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:37.151976 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:37.652004 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:38.152035 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:38.651954 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:39.151847 1860441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:18:39.253425 1860441 kubeadm.go:1113] duration metric: took 4.394229093s to wait for elevateKubeSystemPrivileges
	I0127 14:18:39.253464 1860441 kubeadm.go:394] duration metric: took 4m37.845639079s to StartCluster
	I0127 14:18:39.253490 1860441 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:18:39.253580 1860441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:18:39.255360 1860441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:18:39.255620 1860441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 14:18:39.255691 1860441 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:18:39.255792 1860441 addons.go:69] Setting storage-provisioner=true in profile "no-preload-591346"
	I0127 14:18:39.255815 1860441 addons.go:238] Setting addon storage-provisioner=true in "no-preload-591346"
	W0127 14:18:39.255824 1860441 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:18:39.255821 1860441 addons.go:69] Setting default-storageclass=true in profile "no-preload-591346"
	I0127 14:18:39.255856 1860441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-591346"
	I0127 14:18:39.255868 1860441 host.go:66] Checking if "no-preload-591346" exists ...
	I0127 14:18:39.255860 1860441 addons.go:69] Setting metrics-server=true in profile "no-preload-591346"
	I0127 14:18:39.255850 1860441 addons.go:69] Setting dashboard=true in profile "no-preload-591346"
	I0127 14:18:39.255882 1860441 config.go:182] Loaded profile config "no-preload-591346": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:18:39.255893 1860441 addons.go:238] Setting addon dashboard=true in "no-preload-591346"
	W0127 14:18:39.255904 1860441 addons.go:247] addon dashboard should already be in state true
	I0127 14:18:39.255885 1860441 addons.go:238] Setting addon metrics-server=true in "no-preload-591346"
	W0127 14:18:39.255927 1860441 addons.go:247] addon metrics-server should already be in state true
	I0127 14:18:39.255956 1860441 host.go:66] Checking if "no-preload-591346" exists ...
	I0127 14:18:39.255961 1860441 host.go:66] Checking if "no-preload-591346" exists ...
	I0127 14:18:39.256308 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:39.256308 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:39.256317 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:39.256347 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:39.256353 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:39.256768 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:39.256923 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:39.256942 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:39.259805 1860441 out.go:177] * Verifying Kubernetes components...
	I0127 14:18:39.261249 1860441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:18:39.273577 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0127 14:18:39.273806 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0127 14:18:39.273832 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34563
	I0127 14:18:39.274010 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:39.274328 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:39.274373 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:39.274565 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:18:39.274584 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:39.274975 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:18:39.275006 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:39.275049 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:39.275091 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:18:39.275121 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:39.275235 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetState
	I0127 14:18:39.275472 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:39.275532 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:39.276122 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:39.276174 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:39.276202 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:39.276244 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:39.276251 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0127 14:18:39.276906 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:39.277491 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:18:39.277511 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:39.279698 1860441 addons.go:238] Setting addon default-storageclass=true in "no-preload-591346"
	W0127 14:18:39.279725 1860441 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:18:39.279756 1860441 host.go:66] Checking if "no-preload-591346" exists ...
	I0127 14:18:39.280126 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:39.280165 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:39.280389 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:39.280998 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:39.281047 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:39.295749 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44165
	I0127 14:18:39.296205 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:39.296469 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0127 14:18:39.296660 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41539
	I0127 14:18:39.296783 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:18:39.296801 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:39.296959 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:39.297179 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:39.297192 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:39.297434 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:18:39.297456 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:39.297661 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:18:39.297685 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:39.297905 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:39.297941 1860441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:39.297998 1860441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:39.298355 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:39.298379 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetState
	I0127 14:18:39.300284 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:18:39.300306 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I0127 14:18:39.300635 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:39.301066 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:18:39.301090 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:39.301379 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetState
	I0127 14:18:39.301459 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:39.301727 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetState
	I0127 14:18:39.302282 1860441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:18:39.303463 1860441 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:18:39.303534 1860441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:18:39.303558 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:18:39.303946 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:18:39.304429 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:18:39.305422 1860441 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:18:39.306271 1860441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:18:39.306858 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:18:39.307279 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:18:39.307369 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:18:39.307453 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:18:39.307635 1860441 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:18:39.307652 1860441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:18:39.307662 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:18:39.307669 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:18:39.307816 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:18:39.307956 1860441 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa Username:docker}
	I0127 14:18:39.308442 1860441 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:18:39.309762 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:18:39.309779 1860441 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:18:39.309803 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:18:39.311276 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:18:39.312086 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:18:39.312111 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:18:39.312134 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:18:39.312251 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:18:39.312434 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:18:39.312640 1860441 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa Username:docker}
	I0127 14:18:39.313242 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:18:39.313761 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:18:39.313789 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:18:39.313956 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:18:39.314123 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:18:39.314282 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:18:39.314431 1860441 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa Username:docker}
	I0127 14:18:39.317337 1860441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0127 14:18:39.318088 1860441 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:39.318547 1860441 main.go:141] libmachine: Using API Version  1
	I0127 14:18:39.318570 1860441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:39.318862 1860441 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:39.319043 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetState
	I0127 14:18:39.320571 1860441 main.go:141] libmachine: (no-preload-591346) Calling .DriverName
	I0127 14:18:39.320781 1860441 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:18:39.320798 1860441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:18:39.320816 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHHostname
	I0127 14:18:39.323284 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:18:39.323577 1860441 main.go:141] libmachine: (no-preload-591346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:46:74", ip: ""} in network mk-no-preload-591346: {Iface:virbr1 ExpiryTime:2025-01-27 15:13:51 +0000 UTC Type:0 Mac:52:54:00:50:46:74 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:no-preload-591346 Clientid:01:52:54:00:50:46:74}
	I0127 14:18:39.323601 1860441 main.go:141] libmachine: (no-preload-591346) DBG | domain no-preload-591346 has defined IP address 192.168.39.238 and MAC address 52:54:00:50:46:74 in network mk-no-preload-591346
	I0127 14:18:39.323749 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHPort
	I0127 14:18:39.323882 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHKeyPath
	I0127 14:18:39.324006 1860441 main.go:141] libmachine: (no-preload-591346) Calling .GetSSHUsername
	I0127 14:18:39.324095 1860441 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/no-preload-591346/id_rsa Username:docker}
	I0127 14:18:39.465148 1860441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:18:39.487114 1860441 node_ready.go:35] waiting up to 6m0s for node "no-preload-591346" to be "Ready" ...
	I0127 14:18:39.522533 1860441 node_ready.go:49] node "no-preload-591346" has status "Ready":"True"
	I0127 14:18:39.522561 1860441 node_ready.go:38] duration metric: took 35.412873ms for node "no-preload-591346" to be "Ready" ...
	I0127 14:18:39.522574 1860441 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:18:39.531728 1860441 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:39.587245 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:18:39.587276 1860441 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:18:39.619832 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:18:39.633436 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:18:39.643758 1860441 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:18:39.643785 1860441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:18:39.682571 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:18:39.682605 1860441 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:18:39.717788 1860441 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:18:39.717818 1860441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:18:39.739774 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:18:39.739799 1860441 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:18:39.776579 1860441 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:39.776612 1860441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:18:39.821641 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:18:39.821669 1860441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:18:39.837528 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:39.899562 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:18:39.899592 1860441 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:18:39.941841 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:18:39.941883 1860441 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:18:39.958020 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:18:39.958049 1860441 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:18:39.985706 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:18:39.985733 1860441 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:18:40.018166 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:40.018198 1860441 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:18:40.049338 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:40.335449 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335486 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335522 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335544 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335886 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.335906 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.335921 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.335932 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335939 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335940 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.336011 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336058 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.336071 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.336079 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.336199 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336202 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.336210 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.336321 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336339 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.361215 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.361236 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.361528 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.361572 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.361588 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.976702 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139130092s)
	I0127 14:18:40.976753 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.976768 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.977190 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.977233 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.977244 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.977254 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.977278 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.977544 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.977626 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.977659 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.977685 1860441 addons.go:479] Verifying addon metrics-server=true in "no-preload-591346"
	I0127 14:18:41.537877 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:41.993401 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.943993844s)
	I0127 14:18:41.993457 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:41.993474 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:41.993713 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:41.993737 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:41.993755 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:41.993778 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:41.993785 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:41.994133 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:41.994158 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:41.994172 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:41.995251 1860441 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-591346 addons enable metrics-server
	
	I0127 14:18:41.996556 1860441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 14:18:41.997692 1860441 addons.go:514] duration metric: took 2.74201161s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 14:18:43.539748 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:46.040082 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:47.537709 1860441 pod_ready.go:93] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.537735 1860441 pod_ready.go:82] duration metric: took 8.005981983s for pod "etcd-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.537745 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.545174 1860441 pod_ready.go:93] pod "kube-apiserver-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.545199 1860441 pod_ready.go:82] duration metric: took 7.447836ms for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.545210 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.564920 1860441 pod_ready.go:93] pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.564957 1860441 pod_ready.go:82] duration metric: took 19.735587ms for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.564973 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.588782 1860441 pod_ready.go:93] pod "kube-proxy-k69dv" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.588811 1860441 pod_ready.go:82] duration metric: took 23.829861ms for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.588824 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.598620 1860441 pod_ready.go:93] pod "kube-scheduler-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.598656 1860441 pod_ready.go:82] duration metric: took 9.822306ms for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.598668 1860441 pod_ready.go:39] duration metric: took 8.076081083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:18:47.598693 1860441 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:18:47.598793 1860441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:18:47.615862 1860441 api_server.go:72] duration metric: took 8.36019503s to wait for apiserver process to appear ...
	I0127 14:18:47.615895 1860441 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:18:47.615918 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:18:47.631872 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0127 14:18:47.632742 1860441 api_server.go:141] control plane version: v1.32.1
	I0127 14:18:47.632766 1860441 api_server.go:131] duration metric: took 16.863539ms to wait for apiserver health ...
	I0127 14:18:47.632774 1860441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:18:47.739770 1860441 system_pods.go:59] 9 kube-system pods found
	I0127 14:18:47.739814 1860441 system_pods.go:61] "coredns-668d6bf9bc-cm66w" [97ffe415-a70c-44a4-aa07-5b99576c749d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:18:47.739824 1860441 system_pods.go:61] "coredns-668d6bf9bc-lq9hg" [688b4191-8c28-440b-bc93-d52964fe105c] Running
	I0127 14:18:47.739833 1860441 system_pods.go:61] "etcd-no-preload-591346" [01ae260c-cbf6-4f04-be4e-565f3f408c45] Running
	I0127 14:18:47.739838 1860441 system_pods.go:61] "kube-apiserver-no-preload-591346" [1433350f-5302-42e1-8763-0f8bbde34676] Running
	I0127 14:18:47.739842 1860441 system_pods.go:61] "kube-controller-manager-no-preload-591346" [49eab0a5-09c9-4a2d-9913-1b45c145b52a] Running
	I0127 14:18:47.739846 1860441 system_pods.go:61] "kube-proxy-k69dv" [393d6681-7d87-479a-94d3-5ff6cbfe16ed] Running
	I0127 14:18:47.739849 1860441 system_pods.go:61] "kube-scheduler-no-preload-591346" [9f5af2ad-71a3-4481-a18a-8477f843553a] Running
	I0127 14:18:47.739855 1860441 system_pods.go:61] "metrics-server-f79f97bbb-fqckz" [30644e2b-7988-4b55-aa94-fe774b820ed4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:18:47.739859 1860441 system_pods.go:61] "storage-provisioner" [f10d2d4c-7f96-4ff6-b6ae-71b7918fd3ee] Running
	I0127 14:18:47.739866 1860441 system_pods.go:74] duration metric: took 107.08564ms to wait for pod list to return data ...
	I0127 14:18:47.739874 1860441 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:18:47.936494 1860441 default_sa.go:45] found service account: "default"
	I0127 14:18:47.936524 1860441 default_sa.go:55] duration metric: took 196.641742ms for default service account to be created ...
	I0127 14:18:47.936536 1860441 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:18:48.139726 1860441 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
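The stderr transcript above stalls for the full 4m0s pod_ready timeout on the metrics-server pod, after which minikube abandons restarting the existing control plane, resets it with kubeadm, and re-enables the addons before the harness kills the process. As a minimal illustrative sketch only (not part of the captured output), the same readiness state could be inspected by hand; the profile/context name and the k8s-app=metrics-server label are taken from the log and the standard metrics-server manifests, and are assumptions rather than commands the report itself ran:

	# list the metrics-server pod and its readiness columns in the affected profile
	kubectl --context no-preload-591346 -n kube-system get pods -l k8s-app=metrics-server -o wide
	# show container status and events explaining why the readiness probe never passes
	kubectl --context no-preload-591346 -n kube-system describe pods -l k8s-app=metrics-server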
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-591346 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-591346 -n no-preload-591346
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-591346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-591346 logs -n 25: (1.203089991s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-908018        | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:12 UTC | 27 Jan 25 14:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:12 UTC | 27 Jan 25 14:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-635679                 | embed-certs-635679           | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-635679                                  | embed-certs-635679           | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-591346                  | no-preload-591346            | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-591346                                   | no-preload-591346            | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-212529       | default-k8s-diff-port-212529 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-212529 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC |                     |
	|         | default-k8s-diff-port-212529                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-908018             | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-908018 image                           | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	| delete  | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	| start   | -p newest-cni-309688 --memory=2200 --alsologtostderr   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-309688             | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-309688                  | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-309688 --memory=2200 --alsologtostderr   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-309688 image list                           | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	| delete  | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:18:41
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:18:41.854015 1863329 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:18:41.854179 1863329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:18:41.854190 1863329 out.go:358] Setting ErrFile to fd 2...
	I0127 14:18:41.854197 1863329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:18:41.854387 1863329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 14:18:41.855024 1863329 out.go:352] Setting JSON to false
	I0127 14:18:41.856109 1863329 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":39663,"bootTime":1737947859,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:18:41.856224 1863329 start.go:139] virtualization: kvm guest
	I0127 14:18:41.858116 1863329 out.go:177] * [newest-cni-309688] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:18:41.859411 1863329 notify.go:220] Checking for updates...
	I0127 14:18:41.859457 1863329 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:18:41.860616 1863329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:18:41.861927 1863329 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:18:41.863092 1863329 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 14:18:41.864171 1863329 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:18:41.865251 1863329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:18:41.866889 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:18:41.867384 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:41.867442 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:41.883915 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0127 14:18:41.884516 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:41.885154 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:18:41.885177 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:41.885640 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:41.885855 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:18:41.886202 1863329 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:18:41.886661 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:41.886728 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:41.904702 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0127 14:18:41.905242 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:41.905789 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:18:41.905815 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:41.906241 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:41.906460 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:18:41.947119 1863329 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:18:41.948433 1863329 start.go:297] selected driver: kvm2
	I0127 14:18:41.948449 1863329 start.go:901] validating driver "kvm2" against &{Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:18:41.948615 1863329 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:18:41.949339 1863329 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:18:41.949417 1863329 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:18:41.966476 1863329 install.go:137] /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:18:41.966978 1863329 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 14:18:41.967016 1863329 cni.go:84] Creating CNI manager for ""
	I0127 14:18:41.967062 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:18:41.967095 1863329 start.go:340] cluster config:
	{Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:18:41.967211 1863329 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:18:41.969136 1863329 out.go:177] * Starting "newest-cni-309688" primary control-plane node in "newest-cni-309688" cluster
	I0127 14:18:41.970047 1863329 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:18:41.970083 1863329 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 14:18:41.970090 1863329 cache.go:56] Caching tarball of preloaded images
	I0127 14:18:41.970203 1863329 preload.go:172] Found /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 14:18:41.970215 1863329 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 14:18:41.970348 1863329 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/config.json ...
	I0127 14:18:41.970570 1863329 start.go:360] acquireMachinesLock for newest-cni-309688: {Name:mk6fcac41a7a21b211b65e56994e625852d1a781 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:18:41.970626 1863329 start.go:364] duration metric: took 32.288µs to acquireMachinesLock for "newest-cni-309688"
	I0127 14:18:41.970646 1863329 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:18:41.970657 1863329 fix.go:54] fixHost starting: 
	I0127 14:18:41.971072 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:41.971127 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:41.987333 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0127 14:18:41.987957 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:41.988457 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:18:41.988482 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:41.988963 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:41.989252 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:18:41.989407 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:18:41.991188 1863329 fix.go:112] recreateIfNeeded on newest-cni-309688: state=Stopped err=<nil>
	I0127 14:18:41.991220 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	W0127 14:18:41.991396 1863329 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:18:41.993400 1863329 out.go:177] * Restarting existing kvm2 VM for "newest-cni-309688" ...
	I0127 14:18:39.739774 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:18:39.739799 1860441 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:18:39.776579 1860441 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:39.776612 1860441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:18:39.821641 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:18:39.821669 1860441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:18:39.837528 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:39.899562 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:18:39.899592 1860441 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:18:39.941841 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:18:39.941883 1860441 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:18:39.958020 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:18:39.958049 1860441 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:18:39.985706 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:18:39.985733 1860441 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:18:40.018166 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:40.018198 1860441 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:18:40.049338 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:40.335449 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335486 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335522 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335544 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335886 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.335906 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.335921 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.335932 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335939 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335940 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.336011 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336058 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.336071 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.336079 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.336199 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336202 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.336210 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.336321 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336339 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.361215 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.361236 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.361528 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.361572 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.361588 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.976702 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139130092s)
	I0127 14:18:40.976753 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.976768 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.977190 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.977233 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.977244 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.977254 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.977278 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.977544 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.977626 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.977659 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.977685 1860441 addons.go:479] Verifying addon metrics-server=true in "no-preload-591346"
	I0127 14:18:41.537877 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:41.993401 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.943993844s)
	I0127 14:18:41.993457 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:41.993474 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:41.993713 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:41.993737 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:41.993755 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:41.993778 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:41.993785 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:41.994133 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:41.994158 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:41.994172 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:41.995251 1860441 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-591346 addons enable metrics-server
	
	I0127 14:18:41.996556 1860441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 14:18:41.997692 1860441 addons.go:514] duration metric: took 2.74201161s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 14:18:43.539748 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:40.906503 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:42.906895 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:45.405827 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:41.996357 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Start
	I0127 14:18:41.996613 1863329 main.go:141] libmachine: (newest-cni-309688) starting domain...
	I0127 14:18:41.996630 1863329 main.go:141] libmachine: (newest-cni-309688) ensuring networks are active...
	I0127 14:18:41.997620 1863329 main.go:141] libmachine: (newest-cni-309688) Ensuring network default is active
	I0127 14:18:41.998106 1863329 main.go:141] libmachine: (newest-cni-309688) Ensuring network mk-newest-cni-309688 is active
	I0127 14:18:41.998535 1863329 main.go:141] libmachine: (newest-cni-309688) getting domain XML...
	I0127 14:18:41.999349 1863329 main.go:141] libmachine: (newest-cni-309688) creating domain...
	I0127 14:18:43.362085 1863329 main.go:141] libmachine: (newest-cni-309688) waiting for IP...
	I0127 14:18:43.363264 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:43.363792 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:43.363901 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.363777 1863364 retry.go:31] will retry after 245.978549ms: waiting for domain to come up
	I0127 14:18:43.611613 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:43.612280 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:43.612314 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.612267 1863364 retry.go:31] will retry after 277.473907ms: waiting for domain to come up
	I0127 14:18:43.891925 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:43.892577 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:43.892608 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.892527 1863364 retry.go:31] will retry after 327.737062ms: waiting for domain to come up
	I0127 14:18:44.221804 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:44.222337 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:44.222385 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:44.222298 1863364 retry.go:31] will retry after 472.286938ms: waiting for domain to come up
	I0127 14:18:44.695804 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:44.696473 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:44.696498 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:44.696438 1863364 retry.go:31] will retry after 556.965256ms: waiting for domain to come up
	I0127 14:18:45.254693 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:45.255242 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:45.255276 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:45.255189 1863364 retry.go:31] will retry after 809.038394ms: waiting for domain to come up
	I0127 14:18:46.066036 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:46.066585 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:46.066616 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:46.066540 1863364 retry.go:31] will retry after 758.303359ms: waiting for domain to come up
	I0127 14:18:46.826373 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:46.826997 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:46.827029 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:46.826933 1863364 retry.go:31] will retry after 1.102767077s: waiting for domain to come up
	I0127 14:18:46.040082 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:47.537709 1860441 pod_ready.go:93] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.537735 1860441 pod_ready.go:82] duration metric: took 8.005981983s for pod "etcd-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.537745 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.545174 1860441 pod_ready.go:93] pod "kube-apiserver-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.545199 1860441 pod_ready.go:82] duration metric: took 7.447836ms for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.545210 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.564920 1860441 pod_ready.go:93] pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.564957 1860441 pod_ready.go:82] duration metric: took 19.735587ms for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.564973 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.588782 1860441 pod_ready.go:93] pod "kube-proxy-k69dv" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.588811 1860441 pod_ready.go:82] duration metric: took 23.829861ms for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.588824 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.598620 1860441 pod_ready.go:93] pod "kube-scheduler-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.598656 1860441 pod_ready.go:82] duration metric: took 9.822306ms for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.598668 1860441 pod_ready.go:39] duration metric: took 8.076081083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:18:47.598693 1860441 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:18:47.598793 1860441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:18:47.615862 1860441 api_server.go:72] duration metric: took 8.36019503s to wait for apiserver process to appear ...
	I0127 14:18:47.615895 1860441 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:18:47.615918 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:18:47.631872 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0127 14:18:47.632742 1860441 api_server.go:141] control plane version: v1.32.1
	I0127 14:18:47.632766 1860441 api_server.go:131] duration metric: took 16.863539ms to wait for apiserver health ...
	I0127 14:18:47.632774 1860441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:18:47.739770 1860441 system_pods.go:59] 9 kube-system pods found
	I0127 14:18:47.739814 1860441 system_pods.go:61] "coredns-668d6bf9bc-cm66w" [97ffe415-a70c-44a4-aa07-5b99576c749d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:18:47.739824 1860441 system_pods.go:61] "coredns-668d6bf9bc-lq9hg" [688b4191-8c28-440b-bc93-d52964fe105c] Running
	I0127 14:18:47.739833 1860441 system_pods.go:61] "etcd-no-preload-591346" [01ae260c-cbf6-4f04-be4e-565f3f408c45] Running
	I0127 14:18:47.739838 1860441 system_pods.go:61] "kube-apiserver-no-preload-591346" [1433350f-5302-42e1-8763-0f8bbde34676] Running
	I0127 14:18:47.739842 1860441 system_pods.go:61] "kube-controller-manager-no-preload-591346" [49eab0a5-09c9-4a2d-9913-1b45c145b52a] Running
	I0127 14:18:47.739846 1860441 system_pods.go:61] "kube-proxy-k69dv" [393d6681-7d87-479a-94d3-5ff6cbfe16ed] Running
	I0127 14:18:47.739849 1860441 system_pods.go:61] "kube-scheduler-no-preload-591346" [9f5af2ad-71a3-4481-a18a-8477f843553a] Running
	I0127 14:18:47.739855 1860441 system_pods.go:61] "metrics-server-f79f97bbb-fqckz" [30644e2b-7988-4b55-aa94-fe774b820ed4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:18:47.739859 1860441 system_pods.go:61] "storage-provisioner" [f10d2d4c-7f96-4ff6-b6ae-71b7918fd3ee] Running
	I0127 14:18:47.739866 1860441 system_pods.go:74] duration metric: took 107.08564ms to wait for pod list to return data ...
	I0127 14:18:47.739874 1860441 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:18:47.936494 1860441 default_sa.go:45] found service account: "default"
	I0127 14:18:47.936524 1860441 default_sa.go:55] duration metric: took 196.641742ms for default service account to be created ...
	I0127 14:18:47.936536 1860441 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:18:48.139726 1860441 system_pods.go:87] 9 kube-system pods found
	I0127 14:18:47.405959 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:49.408149 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:47.931337 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:47.931793 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:47.931838 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:47.931776 1863364 retry.go:31] will retry after 1.120510293s: waiting for domain to come up
	I0127 14:18:49.053548 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:49.054204 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:49.054231 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:49.054156 1863364 retry.go:31] will retry after 1.733549309s: waiting for domain to come up
	I0127 14:18:50.790083 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:50.790567 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:50.790650 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:50.790566 1863364 retry.go:31] will retry after 1.990202359s: waiting for domain to come up
	I0127 14:18:51.906048 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:53.906496 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:52.782229 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:52.782850 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:52.782892 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:52.782738 1863364 retry.go:31] will retry after 2.327681841s: waiting for domain to come up
	I0127 14:18:55.113291 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:55.113832 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:55.113864 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:55.113778 1863364 retry.go:31] will retry after 3.526138042s: waiting for domain to come up
	I0127 14:18:55.906587 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:58.405047 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:58.641406 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:58.642022 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:58.642056 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:58.641994 1863364 retry.go:31] will retry after 5.217691775s: waiting for domain to come up
	I0127 14:19:00.906487 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:03.405134 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:05.405708 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:03.862320 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.862779 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has current primary IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.862804 1863329 main.go:141] libmachine: (newest-cni-309688) found domain IP: 192.168.72.17
	I0127 14:19:03.862815 1863329 main.go:141] libmachine: (newest-cni-309688) reserving static IP address...
	I0127 14:19:03.863295 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "newest-cni-309688", mac: "52:54:00:1b:25:ab", ip: "192.168.72.17"} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.863323 1863329 main.go:141] libmachine: (newest-cni-309688) reserved static IP address 192.168.72.17 for domain newest-cni-309688
	I0127 14:19:03.863342 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | skip adding static IP to network mk-newest-cni-309688 - found existing host DHCP lease matching {name: "newest-cni-309688", mac: "52:54:00:1b:25:ab", ip: "192.168.72.17"}
	I0127 14:19:03.863372 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Getting to WaitForSSH function...
	I0127 14:19:03.863389 1863329 main.go:141] libmachine: (newest-cni-309688) waiting for SSH...
	I0127 14:19:03.865894 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.866214 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.866242 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.866399 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Using SSH client type: external
	I0127 14:19:03.866428 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa (-rw-------)
	I0127 14:19:03.866460 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:19:03.866485 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | About to run SSH command:
	I0127 14:19:03.866510 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | exit 0
	I0127 14:19:03.986391 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | SSH cmd err, output: <nil>: 
	I0127 14:19:03.986778 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetConfigRaw
	I0127 14:19:03.987411 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:03.990205 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.990686 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.990714 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.990989 1863329 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/config.json ...
	I0127 14:19:03.991197 1863329 machine.go:93] provisionDockerMachine start ...
	I0127 14:19:03.991218 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:03.991433 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:03.993663 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.993956 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.994002 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.994179 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:03.994359 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:03.994518 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:03.994653 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:03.994863 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:03.995069 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:03.995080 1863329 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:19:04.094835 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 14:19:04.094866 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
	I0127 14:19:04.095102 1863329 buildroot.go:166] provisioning hostname "newest-cni-309688"
	I0127 14:19:04.095129 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
	I0127 14:19:04.095318 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.097835 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.098248 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.098281 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.098404 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.098576 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.098735 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.098905 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.099088 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:04.099267 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:04.099282 1863329 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-309688 && echo "newest-cni-309688" | sudo tee /etc/hostname
	I0127 14:19:04.213036 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-309688
	
	I0127 14:19:04.213082 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.215824 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.216184 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.216208 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.216357 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.216549 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.216671 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.216793 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.216979 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:04.217204 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:04.217230 1863329 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-309688' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-309688/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-309688' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:19:04.329169 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:19:04.329206 1863329 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-1798877/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-1798877/.minikube}
	I0127 14:19:04.329248 1863329 buildroot.go:174] setting up certificates
	I0127 14:19:04.329259 1863329 provision.go:84] configureAuth start
	I0127 14:19:04.329269 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
	I0127 14:19:04.329540 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:04.332411 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.332850 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.332878 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.333078 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.335728 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.336143 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.336174 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.336351 1863329 provision.go:143] copyHostCerts
	I0127 14:19:04.336415 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem, removing ...
	I0127 14:19:04.336439 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem
	I0127 14:19:04.336527 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem (1078 bytes)
	I0127 14:19:04.336664 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem, removing ...
	I0127 14:19:04.336677 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem
	I0127 14:19:04.336718 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem (1123 bytes)
	I0127 14:19:04.336806 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem, removing ...
	I0127 14:19:04.336817 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem
	I0127 14:19:04.336852 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem (1675 bytes)
	I0127 14:19:04.336995 1863329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem org=jenkins.newest-cni-309688 san=[127.0.0.1 192.168.72.17 localhost minikube newest-cni-309688]
	I0127 14:19:04.445795 1863329 provision.go:177] copyRemoteCerts
	I0127 14:19:04.445894 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:19:04.445928 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.448735 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.449074 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.449106 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.449317 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.449501 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.449677 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.449816 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.528783 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:19:04.552897 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 14:19:04.575992 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:19:04.598152 1863329 provision.go:87] duration metric: took 268.879651ms to configureAuth
	I0127 14:19:04.598183 1863329 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:19:04.598397 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:19:04.598411 1863329 machine.go:96] duration metric: took 607.201271ms to provisionDockerMachine
	I0127 14:19:04.598421 1863329 start.go:293] postStartSetup for "newest-cni-309688" (driver="kvm2")
	I0127 14:19:04.598437 1863329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:19:04.598481 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.598842 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:19:04.598874 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.601257 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.601599 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.601628 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.601759 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.601945 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.602093 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.602260 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.685084 1863329 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:19:04.689047 1863329 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:19:04.689081 1863329 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/addons for local assets ...
	I0127 14:19:04.689137 1863329 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/files for local assets ...
	I0127 14:19:04.689212 1863329 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem -> 18060702.pem in /etc/ssl/certs
	I0127 14:19:04.689300 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:19:04.698109 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:19:04.723269 1863329 start.go:296] duration metric: took 124.828224ms for postStartSetup
	I0127 14:19:04.723315 1863329 fix.go:56] duration metric: took 22.752659687s for fixHost
	I0127 14:19:04.723339 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.726123 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.726570 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.726601 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.726820 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.727042 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.727229 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.727405 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.727627 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:04.727869 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:04.727885 1863329 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:19:04.831094 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987544.794055340
	
	I0127 14:19:04.831118 1863329 fix.go:216] guest clock: 1737987544.794055340
	I0127 14:19:04.831124 1863329 fix.go:229] Guest: 2025-01-27 14:19:04.79405534 +0000 UTC Remote: 2025-01-27 14:19:04.723319581 +0000 UTC m=+22.912787075 (delta=70.735759ms)
	I0127 14:19:04.831145 1863329 fix.go:200] guest clock delta is within tolerance: 70.735759ms
	I0127 14:19:04.831149 1863329 start.go:83] releasing machines lock for "newest-cni-309688", held for 22.860512585s
	I0127 14:19:04.831167 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.831433 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:04.834349 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.834694 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.834718 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.834915 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.835447 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.835626 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.835729 1863329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:19:04.835772 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.835799 1863329 ssh_runner.go:195] Run: cat /version.json
	I0127 14:19:04.835821 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.838501 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.838695 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.838855 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.838881 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.839077 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.839082 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.839117 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.839262 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.839272 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.839481 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.839482 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.839635 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.839648 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.839742 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.942379 1863329 ssh_runner.go:195] Run: systemctl --version
	I0127 14:19:04.948168 1863329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:19:04.953645 1863329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:19:04.953703 1863329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:19:04.969617 1863329 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:19:04.969646 1863329 start.go:495] detecting cgroup driver to use...
	I0127 14:19:04.969742 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 14:19:05.001151 1863329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 14:19:05.014859 1863329 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:19:05.014928 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:19:05.030145 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:19:05.044008 1863329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:19:05.174941 1863329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:19:05.330526 1863329 docker.go:233] disabling docker service ...
	I0127 14:19:05.330619 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:19:05.345183 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:19:05.357628 1863329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:19:05.474635 1863329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:19:05.587063 1863329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:19:05.600224 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:19:05.616896 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 14:19:05.628539 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 14:19:05.639531 1863329 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 14:19:05.639605 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 14:19:05.649978 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:19:05.659986 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 14:19:05.669665 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:19:05.680018 1863329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:19:05.690041 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 14:19:05.699586 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 14:19:05.709482 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 14:19:05.719643 1863329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:19:05.728454 1863329 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:19:05.728520 1863329 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:19:05.743292 1863329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:19:05.752875 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:05.862682 1863329 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 14:19:05.897001 1863329 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 14:19:05.897074 1863329 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:19:05.901946 1863329 retry.go:31] will retry after 1.257073282s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 14:19:07.159917 1863329 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:19:07.165117 1863329 start.go:563] Will wait 60s for crictl version
	I0127 14:19:07.165209 1863329 ssh_runner.go:195] Run: which crictl
	I0127 14:19:07.168995 1863329 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:19:07.209167 1863329 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 14:19:07.209244 1863329 ssh_runner.go:195] Run: containerd --version
	I0127 14:19:07.236320 1863329 ssh_runner.go:195] Run: containerd --version
	I0127 14:19:07.261054 1863329 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 14:19:07.262245 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:07.265288 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:07.265739 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:07.265772 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:07.265980 1863329 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 14:19:07.270111 1863329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:19:07.283905 1863329 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 14:19:07.406716 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:09.905446 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:07.285143 1863329 kubeadm.go:883] updating cluster {Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:19:07.285271 1863329 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:19:07.285342 1863329 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:19:07.314913 1863329 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:19:07.314944 1863329 containerd.go:534] Images already preloaded, skipping extraction
	I0127 14:19:07.315010 1863329 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:19:07.345742 1863329 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:19:07.345770 1863329 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:19:07.345779 1863329 kubeadm.go:934] updating node { 192.168.72.17 8443 v1.32.1 containerd true true} ...
	I0127 14:19:07.345897 1863329 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-309688 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:19:07.345956 1863329 ssh_runner.go:195] Run: sudo crictl info
	I0127 14:19:07.379712 1863329 cni.go:84] Creating CNI manager for ""
	I0127 14:19:07.379740 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:19:07.379759 1863329 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 14:19:07.379800 1863329 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.17 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-309688 NodeName:newest-cni-309688 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:19:07.379979 1863329 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-309688"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.17"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.17"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:19:07.380049 1863329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:19:07.390315 1863329 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:19:07.390456 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:19:07.399585 1863329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 14:19:07.417531 1863329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:19:07.433514 1863329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 14:19:07.449318 1863329 ssh_runner.go:195] Run: grep 192.168.72.17	control-plane.minikube.internal$ /etc/hosts
	I0127 14:19:07.452848 1863329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:19:07.464375 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:07.590492 1863329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:19:07.609018 1863329 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688 for IP: 192.168.72.17
	I0127 14:19:07.609048 1863329 certs.go:194] generating shared ca certs ...
	I0127 14:19:07.609072 1863329 certs.go:226] acquiring lock for ca certs: {Name:mkc6b95fb3d2c0d0c7049cde446028a0d731f231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:07.609277 1863329 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key
	I0127 14:19:07.609328 1863329 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key
	I0127 14:19:07.609339 1863329 certs.go:256] generating profile certs ...
	I0127 14:19:07.609434 1863329 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/client.key
	I0127 14:19:07.609500 1863329 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.key.54b7a6ae
	I0127 14:19:07.609534 1863329 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.key
	I0127 14:19:07.609661 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem (1338 bytes)
	W0127 14:19:07.609700 1863329 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070_empty.pem, impossibly tiny 0 bytes
	I0127 14:19:07.609707 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:19:07.609732 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:19:07.609776 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:19:07.609807 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem (1675 bytes)
	I0127 14:19:07.609872 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:19:07.613389 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:19:07.649675 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:19:07.678577 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:19:07.707466 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 14:19:07.736820 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 14:19:07.764078 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:19:07.791040 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:19:07.817979 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:19:07.846978 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:19:07.869002 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem --> /usr/share/ca-certificates/1806070.pem (1338 bytes)
	I0127 14:19:07.892530 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /usr/share/ca-certificates/18060702.pem (1708 bytes)
	I0127 14:19:07.917138 1863329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:19:07.933638 1863329 ssh_runner.go:195] Run: openssl version
	I0127 14:19:07.939662 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:19:07.951267 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:07.955439 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:07.955494 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:07.961014 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:19:07.972145 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806070.pem && ln -fs /usr/share/ca-certificates/1806070.pem /etc/ssl/certs/1806070.pem"
	I0127 14:19:07.983517 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806070.pem
	I0127 14:19:07.987671 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:10 /usr/share/ca-certificates/1806070.pem
	I0127 14:19:07.987719 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806070.pem
	I0127 14:19:07.993079 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806070.pem /etc/ssl/certs/51391683.0"
	I0127 14:19:08.004139 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18060702.pem && ln -fs /usr/share/ca-certificates/18060702.pem /etc/ssl/certs/18060702.pem"
	I0127 14:19:08.015248 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18060702.pem
	I0127 14:19:08.019068 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:10 /usr/share/ca-certificates/18060702.pem
	I0127 14:19:08.019113 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18060702.pem
	I0127 14:19:08.024062 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18060702.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:19:08.033948 1863329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:19:08.038251 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:19:08.043547 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:19:08.048804 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:19:08.054182 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:19:08.059290 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:19:08.064227 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 14:19:08.069315 1863329 kubeadm.go:392] StartCluster: {Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: S
ubnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:19:08.069441 1863329 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 14:19:08.069490 1863329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:19:08.106407 1863329 cri.go:89] found id: "44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666"
	I0127 14:19:08.106434 1863329 cri.go:89] found id: "d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186"
	I0127 14:19:08.106441 1863329 cri.go:89] found id: "7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf"
	I0127 14:19:08.106446 1863329 cri.go:89] found id: "72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199"
	I0127 14:19:08.106450 1863329 cri.go:89] found id: "0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698"
	I0127 14:19:08.106455 1863329 cri.go:89] found id: "0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63"
	I0127 14:19:08.106459 1863329 cri.go:89] found id: "2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb"
	I0127 14:19:08.106463 1863329 cri.go:89] found id: "89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe"
	I0127 14:19:08.106467 1863329 cri.go:89] found id: ""
	I0127 14:19:08.106525 1863329 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 14:19:08.121718 1863329 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T14:19:08Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 14:19:08.121817 1863329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:19:08.131128 1863329 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 14:19:08.131152 1863329 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 14:19:08.131206 1863329 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 14:19:08.141323 1863329 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:19:08.142436 1863329 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-309688" does not appear in /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:19:08.143126 1863329 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-1798877/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-309688" cluster setting kubeconfig missing "newest-cni-309688" context setting]
	I0127 14:19:08.144090 1863329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:08.145938 1863329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 14:19:08.155827 1863329 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.17
	I0127 14:19:08.155862 1863329 kubeadm.go:1160] stopping kube-system containers ...
	I0127 14:19:08.155887 1863329 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 14:19:08.155943 1863329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:19:08.191753 1863329 cri.go:89] found id: "44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666"
	I0127 14:19:08.191787 1863329 cri.go:89] found id: "d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186"
	I0127 14:19:08.191794 1863329 cri.go:89] found id: "7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf"
	I0127 14:19:08.191799 1863329 cri.go:89] found id: "72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199"
	I0127 14:19:08.191804 1863329 cri.go:89] found id: "0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698"
	I0127 14:19:08.191808 1863329 cri.go:89] found id: "0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63"
	I0127 14:19:08.191812 1863329 cri.go:89] found id: "2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb"
	I0127 14:19:08.191817 1863329 cri.go:89] found id: "89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe"
	I0127 14:19:08.191822 1863329 cri.go:89] found id: ""
	I0127 14:19:08.191829 1863329 cri.go:252] Stopping containers: [44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666 d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186 7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf 72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199 0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698 0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63 2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb 89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe]
	I0127 14:19:08.191909 1863329 ssh_runner.go:195] Run: which crictl
	I0127 14:19:08.195781 1863329 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666 d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186 7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf 72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199 0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698 0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63 2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb 89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe
	I0127 14:19:08.232200 1863329 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 14:19:08.248830 1863329 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:19:08.258186 1863329 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:19:08.258248 1863329 kubeadm.go:157] found existing configuration files:
	
	I0127 14:19:08.258301 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:19:08.266710 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:19:08.266787 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:19:08.276679 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:19:08.285327 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:19:08.285384 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:19:08.293919 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:19:08.302352 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:19:08.302466 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:19:08.314481 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:19:08.324318 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:19:08.324378 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:19:08.333925 1863329 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:19:08.343981 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:08.484856 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.407056 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.612649 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.691321 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.780355 1863329 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:19:09.780450 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:10.281441 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:10.780982 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:10.803824 1863329 api_server.go:72] duration metric: took 1.023465596s to wait for apiserver process to appear ...
	I0127 14:19:10.803860 1863329 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:19:10.803886 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:10.804578 1863329 api_server.go:269] stopped: https://192.168.72.17:8443/healthz: Get "https://192.168.72.17:8443/healthz": dial tcp 192.168.72.17:8443: connect: connection refused
	I0127 14:19:11.304934 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:11.906081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:13.906183 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:13.554007 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:19:13.554040 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:19:13.554061 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:13.596380 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:19:13.596419 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:19:13.804894 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:13.819580 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:13.819610 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:14.304214 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:14.309598 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:14.309627 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:14.804236 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:14.809512 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:14.809551 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:15.304181 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:15.309590 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:15.309618 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:15.803958 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:15.813848 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:15.813901 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:16.304624 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:16.310313 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:16.310345 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:16.804590 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:16.809168 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 200:
	ok
	I0127 14:19:16.816088 1863329 api_server.go:141] control plane version: v1.32.1
	I0127 14:19:16.816123 1863329 api_server.go:131] duration metric: took 6.012253595s to wait for apiserver health ...
	I0127 14:19:16.816135 1863329 cni.go:84] Creating CNI manager for ""
	I0127 14:19:16.816144 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:19:16.817843 1863329 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:19:16.819038 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:19:16.829479 1863329 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:19:16.847164 1863329 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:19:16.857140 1863329 system_pods.go:59] 9 kube-system pods found
	I0127 14:19:16.857176 1863329 system_pods.go:61] "coredns-668d6bf9bc-f66f4" [b30ba9c6-eb6e-44c1-b389-96263bb405a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:16.857187 1863329 system_pods.go:61] "coredns-668d6bf9bc-pt6d2" [d8cf3c75-3646-40d8-8131-efd331e2cec7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:16.857198 1863329 system_pods.go:61] "etcd-newest-cni-309688" [f963f636-8186-4dd8-8263-b7bc29d15bc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:19:16.857210 1863329 system_pods.go:61] "kube-apiserver-newest-cni-309688" [90584ec0-2731-48e1-a2e6-5bef7b170386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:19:16.857219 1863329 system_pods.go:61] "kube-controller-manager-newest-cni-309688" [ef3b3e37-08d8-48aa-a55e-55d4c87c8189] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:19:16.857227 1863329 system_pods.go:61] "kube-proxy-8mwp9" [ebb658f3-eba2-4743-94cd-da996046bd02] Running
	I0127 14:19:16.857236 1863329 system_pods.go:61] "kube-scheduler-newest-cni-309688" [07bcbbe9-474a-4bc2-9f58-f889fa685754] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:19:16.857263 1863329 system_pods.go:61] "metrics-server-f79f97bbb-jw4m9" [85929ac8-142c-4bc7-90da-5c13f9ff3c0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:19:16.857277 1863329 system_pods.go:61] "storage-provisioner" [94820502-acf5-4297-8fc9-d4b4953b01ab] Running
	I0127 14:19:16.857287 1863329 system_pods.go:74] duration metric: took 10.102454ms to wait for pod list to return data ...
	I0127 14:19:16.857300 1863329 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:19:16.860835 1863329 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:19:16.860862 1863329 node_conditions.go:123] node cpu capacity is 2
	I0127 14:19:16.860886 1863329 node_conditions.go:105] duration metric: took 3.575582ms to run NodePressure ...
	I0127 14:19:16.860913 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:17.133479 1863329 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:19:17.144656 1863329 ops.go:34] apiserver oom_adj: -16
	I0127 14:19:17.144684 1863329 kubeadm.go:597] duration metric: took 9.013524206s to restartPrimaryControlPlane
	I0127 14:19:17.144695 1863329 kubeadm.go:394] duration metric: took 9.075390076s to StartCluster
	I0127 14:19:17.144715 1863329 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:17.144810 1863329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:19:17.146498 1863329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:17.146819 1863329 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 14:19:17.146906 1863329 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:19:17.147019 1863329 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-309688"
	I0127 14:19:17.147042 1863329 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-309688"
	I0127 14:19:17.147041 1863329 addons.go:69] Setting default-storageclass=true in profile "newest-cni-309688"
	W0127 14:19:17.147054 1863329 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:19:17.147075 1863329 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-309688"
	I0127 14:19:17.147081 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:19:17.147079 1863329 addons.go:69] Setting dashboard=true in profile "newest-cni-309688"
	I0127 14:19:17.147063 1863329 addons.go:69] Setting metrics-server=true in profile "newest-cni-309688"
	I0127 14:19:17.147150 1863329 addons.go:238] Setting addon metrics-server=true in "newest-cni-309688"
	W0127 14:19:17.147164 1863329 addons.go:247] addon metrics-server should already be in state true
	I0127 14:19:17.147190 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.147088 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.147127 1863329 addons.go:238] Setting addon dashboard=true in "newest-cni-309688"
	W0127 14:19:17.147431 1863329 addons.go:247] addon dashboard should already be in state true
	I0127 14:19:17.147463 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.147523 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147558 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147565 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.147607 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.147687 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147718 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.147797 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147810 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.148440 1863329 out.go:177] * Verifying Kubernetes components...
	I0127 14:19:17.149687 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:17.163903 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0127 14:19:17.164136 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0127 14:19:17.164313 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.164874 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.165122 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.165143 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.165396 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.165415 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.165676 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.165822 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.165886 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.166471 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.166526 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.175217 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I0127 14:19:17.175873 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.176532 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.176558 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.176979 1863329 addons.go:238] Setting addon default-storageclass=true in "newest-cni-309688"
	I0127 14:19:17.176997 1863329 main.go:141] libmachine: () Calling .GetMachineName
	W0127 14:19:17.177002 1863329 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:19:17.177080 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.177500 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.177518 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.177541 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.177556 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.192916 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
	I0127 14:19:17.193458 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.194088 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.194110 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.194524 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.195179 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.195214 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.196238 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0127 14:19:17.196598 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.196918 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0127 14:19:17.197180 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.197200 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.197360 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.197480 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.197523 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35899
	I0127 14:19:17.197802 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.197813 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.198103 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.198164 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.198321 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.198535 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.198583 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.198888 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.198902 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.199305 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.199518 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.200369 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.201165 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.202593 1863329 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:19:17.202676 1863329 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:19:17.203794 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:19:17.203807 1863329 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:19:17.203824 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.203911 1863329 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:19:17.203926 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:19:17.203944 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.207477 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.207978 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.208029 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.208889 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.209077 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.209227 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.209363 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.216222 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.216592 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I0127 14:19:17.216814 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.216831 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.216961 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.217064 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.217256 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.217411 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.217422 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.217463 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.217578 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.217795 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.217839 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0127 14:19:17.218152 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.218203 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.218804 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.218816 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.219270 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.219480 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.219969 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.220954 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.221278 1863329 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:19:17.221291 1863329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:19:17.221312 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.221888 1863329 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:19:17.223572 1863329 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:19:17.225013 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:19:17.225038 1863329 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:19:17.225052 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.225188 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.225554 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.225777 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.225825 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.226023 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.226118 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.226242 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.228625 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.228937 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.228977 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.229171 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.229344 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.229536 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.229794 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.331878 1863329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:19:17.351919 1863329 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:19:17.352011 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:17.365611 1863329 api_server.go:72] duration metric: took 218.744274ms to wait for apiserver process to appear ...
	I0127 14:19:17.365637 1863329 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:19:17.365655 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:17.372023 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 200:
	ok
	I0127 14:19:17.373577 1863329 api_server.go:141] control plane version: v1.32.1
	I0127 14:19:17.373603 1863329 api_server.go:131] duration metric: took 7.959402ms to wait for apiserver health ...
	I0127 14:19:17.373612 1863329 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:19:17.382361 1863329 system_pods.go:59] 9 kube-system pods found
	I0127 14:19:17.382397 1863329 system_pods.go:61] "coredns-668d6bf9bc-f66f4" [b30ba9c6-eb6e-44c1-b389-96263bb405a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:17.382408 1863329 system_pods.go:61] "coredns-668d6bf9bc-pt6d2" [d8cf3c75-3646-40d8-8131-efd331e2cec7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:17.382420 1863329 system_pods.go:61] "etcd-newest-cni-309688" [f963f636-8186-4dd8-8263-b7bc29d15bc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:19:17.382430 1863329 system_pods.go:61] "kube-apiserver-newest-cni-309688" [90584ec0-2731-48e1-a2e6-5bef7b170386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:19:17.382453 1863329 system_pods.go:61] "kube-controller-manager-newest-cni-309688" [ef3b3e37-08d8-48aa-a55e-55d4c87c8189] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:19:17.382460 1863329 system_pods.go:61] "kube-proxy-8mwp9" [ebb658f3-eba2-4743-94cd-da996046bd02] Running
	I0127 14:19:17.382473 1863329 system_pods.go:61] "kube-scheduler-newest-cni-309688" [07bcbbe9-474a-4bc2-9f58-f889fa685754] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:19:17.382480 1863329 system_pods.go:61] "metrics-server-f79f97bbb-jw4m9" [85929ac8-142c-4bc7-90da-5c13f9ff3c0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:19:17.382486 1863329 system_pods.go:61] "storage-provisioner" [94820502-acf5-4297-8fc9-d4b4953b01ab] Running
	I0127 14:19:17.382496 1863329 system_pods.go:74] duration metric: took 8.875555ms to wait for pod list to return data ...
	I0127 14:19:17.382507 1863329 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:19:17.385289 1863329 default_sa.go:45] found service account: "default"
	I0127 14:19:17.385310 1863329 default_sa.go:55] duration metric: took 2.794486ms for default service account to be created ...
	I0127 14:19:17.385319 1863329 kubeadm.go:582] duration metric: took 238.459291ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 14:19:17.385341 1863329 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:19:17.388555 1863329 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:19:17.388583 1863329 node_conditions.go:123] node cpu capacity is 2
	I0127 14:19:17.388596 1863329 node_conditions.go:105] duration metric: took 3.249906ms to run NodePressure ...
	I0127 14:19:17.388610 1863329 start.go:241] waiting for startup goroutines ...
	I0127 14:19:17.418149 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:19:17.421312 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:19:17.421340 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:19:17.438395 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:19:17.454881 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:19:17.454907 1863329 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:19:17.463957 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:19:17.463983 1863329 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:19:17.511881 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:19:17.511918 1863329 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:19:17.526875 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:19:17.526902 1863329 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:19:17.564740 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:19:17.593901 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:19:17.593956 1863329 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:19:17.686229 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:19:17.686255 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:19:17.771605 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:19:17.771642 1863329 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:19:17.858960 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:19:17.858995 1863329 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:19:17.968615 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:19:17.968653 1863329 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:19:18.103281 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:19:18.103311 1863329 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:19:18.180707 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:19:18.180741 1863329 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:19:18.229422 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:19:19.526682 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.088226902s)
	I0127 14:19:19.526763 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.526777 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.526802 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962012351s)
	I0127 14:19:19.526851 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.526861 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.108674811s)
	I0127 14:19:19.526875 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.526891 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.526910 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.527161 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.527203 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.527212 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.527219 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.527227 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.528059 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528072 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528080 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.528088 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.528229 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528239 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528293 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.528342 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528349 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528356 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.528362 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.528502 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.528531 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528538 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528548 1863329 addons.go:479] Verifying addon metrics-server=true in "newest-cni-309688"
	I0127 14:19:19.528986 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.529006 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.529009 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.552242 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.552274 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.552631 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.552650 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.709148 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.47964575s)
	I0127 14:19:19.709210 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.709226 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.709584 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.709606 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.709613 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.709610 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.709620 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.709911 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.709925 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.711462 1863329 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-309688 addons enable metrics-server
	
	I0127 14:19:19.712846 1863329 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0127 14:19:19.714093 1863329 addons.go:514] duration metric: took 2.567193619s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0127 14:19:19.714146 1863329 start.go:246] waiting for cluster config update ...
	I0127 14:19:19.714163 1863329 start.go:255] writing updated cluster config ...
	I0127 14:19:19.714515 1863329 ssh_runner.go:195] Run: rm -f paused
	I0127 14:19:19.771292 1863329 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:19:19.773125 1863329 out.go:177] * Done! kubectl is now configured to use "newest-cni-309688" cluster and "default" namespace by default
	I0127 14:19:16.407410 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:18.408328 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:20.905706 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:22.906390 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:25.405847 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:27.406081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:29.406653 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:31.905101 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:33.906032 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:36.406416 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:38.905541 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:41.405451 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:43.405883 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:45.905497 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:47.905917 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:50.405296 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:52.405989 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:54.905953 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:56.906021 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:58.906598 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:01.405909 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:03.406128 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:05.906092 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:08.405216 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:10.405449 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:12.905583 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:14.399935 1860751 pod_ready.go:82] duration metric: took 4m0.000530283s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" ...
	E0127 14:20:14.399966 1860751 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 14:20:14.399992 1860751 pod_ready.go:39] duration metric: took 4m31.410913398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:14.400032 1860751 kubeadm.go:597] duration metric: took 5m29.594675564s to restartPrimaryControlPlane
	W0127 14:20:14.400141 1860751 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
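Once the 4m0s readiness wait for metrics-server expires, minikube stops trying to restart the existing control plane and falls back to a full reset-and-reinit, which is what the kubeadm reset/init sequence below records. For anyone reproducing this locally, a first diagnostic pass on the stuck pod might look like the sketch below (hypothetical commands, not part of the test run, and assuming the standard k8s-app=metrics-server label; in this job the metrics-server image is pointed at fake.domain/registry.k8s.io/echoserver:1.4, so an image-pull failure is the likely reason the pod never reports Ready):

# Sketch: inspect why metrics-server never becomes Ready (assumes kubectl access to the profile)
kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
kubectl -n kube-system describe pods -l k8s-app=metrics-server   # check Events for ImagePullBackOff / probe failures
kubectl -n kube-system logs deploy/metrics-server --tail=50 || true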
	I0127 14:20:14.400175 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 14:20:15.909704 1860751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.509493932s)
	I0127 14:20:15.909782 1860751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:20:15.925857 1860751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:20:15.935803 1860751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:20:15.946508 1860751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:20:15.946527 1860751 kubeadm.go:157] found existing configuration files:
	
	I0127 14:20:15.946566 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 14:20:15.956633 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:20:15.956690 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:20:15.966965 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 14:20:15.984740 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:20:15.984801 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:20:15.995541 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 14:20:16.005543 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:20:16.005605 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:20:16.015855 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 14:20:16.025594 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:20:16.025640 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
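The four grep/rm pairs above are a stale-kubeconfig sweep: each of the /etc/kubernetes/*.conf files is checked for the expected API endpoint (https://control-plane.minikube.internal:8444) and removed when it does not reference it, which here simply confirms the files are already gone after the reset. A minimal shell equivalent of that pass (illustrative only, not minikube's implementation) is:

# Illustrative equivalent of the stale-config cleanup logged above
endpoint="https://control-plane.minikube.internal:8444"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
    sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at the expected endpoint
  fi
done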
	I0127 14:20:16.035989 1860751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:20:16.197395 1860751 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:20:24.074171 1860751 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:20:24.074259 1860751 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:20:24.074369 1860751 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:20:24.074528 1860751 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:20:24.074657 1860751 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:20:24.074731 1860751 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:20:24.076292 1860751 out.go:235]   - Generating certificates and keys ...
	I0127 14:20:24.076373 1860751 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:20:24.076450 1860751 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:20:24.076532 1860751 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:20:24.076585 1860751 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:20:24.076644 1860751 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:20:24.076713 1860751 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:20:24.076800 1860751 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:20:24.076884 1860751 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:20:24.076992 1860751 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:20:24.077103 1860751 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:20:24.077169 1860751 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:20:24.077243 1860751 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:20:24.077289 1860751 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:20:24.077349 1860751 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:20:24.077397 1860751 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:20:24.077468 1860751 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:20:24.077537 1860751 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:20:24.077610 1860751 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:20:24.077669 1860751 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:20:24.078852 1860751 out.go:235]   - Booting up control plane ...
	I0127 14:20:24.078965 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:20:24.079055 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:20:24.079140 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:20:24.079285 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:20:24.079429 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:20:24.079489 1860751 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:20:24.079690 1860751 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:20:24.079833 1860751 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:20:24.079921 1860751 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.61135ms
	I0127 14:20:24.080007 1860751 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:20:24.080110 1860751 kubeadm.go:310] [api-check] The API server is healthy after 5.001239504s
	I0127 14:20:24.080256 1860751 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:20:24.080387 1860751 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:20:24.080441 1860751 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:20:24.080637 1860751 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-212529 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:20:24.080711 1860751 kubeadm.go:310] [bootstrap-token] Using token: pxjq5d.hk6ws8nooc0hkr03
	I0127 14:20:24.082018 1860751 out.go:235]   - Configuring RBAC rules ...
	I0127 14:20:24.082176 1860751 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:20:24.082314 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:20:24.082518 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:20:24.082703 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:20:24.082889 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:20:24.083015 1860751 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:20:24.083173 1860751 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:20:24.083250 1860751 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:20:24.083301 1860751 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:20:24.083311 1860751 kubeadm.go:310] 
	I0127 14:20:24.083396 1860751 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:20:24.083407 1860751 kubeadm.go:310] 
	I0127 14:20:24.083513 1860751 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:20:24.083522 1860751 kubeadm.go:310] 
	I0127 14:20:24.083558 1860751 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:20:24.083655 1860751 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:20:24.083734 1860751 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:20:24.083743 1860751 kubeadm.go:310] 
	I0127 14:20:24.083802 1860751 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:20:24.083810 1860751 kubeadm.go:310] 
	I0127 14:20:24.083852 1860751 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:20:24.083858 1860751 kubeadm.go:310] 
	I0127 14:20:24.083921 1860751 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:20:24.084043 1860751 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:20:24.084140 1860751 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:20:24.084149 1860751 kubeadm.go:310] 
	I0127 14:20:24.084263 1860751 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:20:24.084383 1860751 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:20:24.084400 1860751 kubeadm.go:310] 
	I0127 14:20:24.084497 1860751 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
	I0127 14:20:24.084584 1860751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e \
	I0127 14:20:24.084604 1860751 kubeadm.go:310] 	--control-plane 
	I0127 14:20:24.084610 1860751 kubeadm.go:310] 
	I0127 14:20:24.084679 1860751 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:20:24.084685 1860751 kubeadm.go:310] 
	I0127 14:20:24.084750 1860751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
	I0127 14:20:24.084894 1860751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e 
	I0127 14:20:24.084923 1860751 cni.go:84] Creating CNI manager for ""
	I0127 14:20:24.084937 1860751 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:20:24.086257 1860751 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:20:24.087300 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:20:24.097744 1860751 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
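The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced by the "Configuring bridge CNI" step. Its exact contents are not shown in the log; a typical bridge-plus-portmap conflist of roughly that shape looks like the assumed example below (values such as the pod subnet may differ from what minikube actually generated):

# Assumed example of a bridge CNI conflist; the real file written by minikube may differ
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF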
	I0127 14:20:24.115867 1860751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:20:24.115958 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:24.115962 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-212529 minikube.k8s.io/updated_at=2025_01_27T14_20_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=default-k8s-diff-port-212529 minikube.k8s.io/primary=true
	I0127 14:20:24.324045 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:24.324042 1860751 ops.go:34] apiserver oom_adj: -16
	I0127 14:20:24.824528 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:25.324196 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:25.824971 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:26.324285 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:26.825007 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:27.324812 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:27.824252 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:28.324496 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:28.413845 1860751 kubeadm.go:1113] duration metric: took 4.297974897s to wait for elevateKubeSystemPrivileges
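The burst of `kubectl get sa default` calls at roughly 500ms intervals above is a poll: after creating the minikube-rbac ClusterRoleBinding, minikube keeps checking until the `default` ServiceAccount exists before treating the elevateKubeSystemPrivileges step as done. An illustrative stand-alone version of that wait (using the same in-VM paths shown in the log) would be:

# Illustrative poll equivalent to the repeated "get sa default" calls above
until sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5   # the log shows roughly 500ms between attempts
done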
	I0127 14:20:28.413890 1860751 kubeadm.go:394] duration metric: took 5m43.681075591s to StartCluster
	I0127 14:20:28.413911 1860751 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:20:28.414029 1860751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:20:28.416135 1860751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:20:28.416434 1860751 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.145 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 14:20:28.416580 1860751 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:20:28.416710 1860751 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416715 1860751 config.go:182] Loaded profile config "default-k8s-diff-port-212529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:20:28.416736 1860751 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.416745 1860751 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:20:28.416742 1860751 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416759 1860751 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416785 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.416797 1860751 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.416807 1860751 addons.go:247] addon dashboard should already be in state true
	I0127 14:20:28.416843 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.417198 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417233 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.417240 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417275 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.416772 1860751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-212529"
	I0127 14:20:28.416777 1860751 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.417322 1860751 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.417337 1860751 addons.go:247] addon metrics-server should already be in state true
	I0127 14:20:28.417560 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.417900 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417916 1860751 out.go:177] * Verifying Kubernetes components...
	I0127 14:20:28.417955 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.417963 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.418005 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.419061 1860751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:20:28.434949 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
	I0127 14:20:28.435505 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.436082 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.436114 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.436521 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.436752 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.437523 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0127 14:20:28.437697 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I0127 14:20:28.438072 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.438417 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.438657 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.438682 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.438906 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.438929 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.439056 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.439281 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.439489 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
	I0127 14:20:28.439624 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.439660 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.439804 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.439846 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.439944 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.440409 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.440432 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.440811 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.441377 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.441420 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.441785 1860751 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.441804 1860751 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:20:28.441836 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.442074 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.442111 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.460558 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I0127 14:20:28.461043 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
	I0127 14:20:28.461200 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.461461 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.461725 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.461749 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.461814 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0127 14:20:28.462061 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.462083 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.462286 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.462330 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.462485 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.462605 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.462762 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.462775 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.462832 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.463228 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.463817 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.463862 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.464659 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.465253 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.466108 1860751 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:20:28.466667 1860751 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:20:28.467300 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:20:28.467316 1860751 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:20:28.467333 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.469055 1860751 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:20:28.469287 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
	I0127 14:20:28.469629 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.470009 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:20:28.470027 1860751 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:20:28.470055 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.470158 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.470180 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.470774 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.470967 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.471164 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.471781 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.471814 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.472153 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.472327 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.472488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.472639 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.473502 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.473853 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.474311 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.474338 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.474488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.474652 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.474805 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.474896 1860751 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:20:28.474964 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.475898 1860751 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:20:28.475916 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:20:28.475933 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.478521 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.478927 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.478950 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.479131 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.479325 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.479479 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.479622 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.482246 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I0127 14:20:28.482637 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.483047 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.483068 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.483409 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.483542 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.484999 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.485241 1860751 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:20:28.485259 1860751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:20:28.485276 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.488061 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.488402 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.488429 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.488581 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.488725 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.488858 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.489030 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.646865 1860751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:20:28.672532 1860751 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-212529" to be "Ready" ...
	I0127 14:20:28.703176 1860751 node_ready.go:49] node "default-k8s-diff-port-212529" has status "Ready":"True"
	I0127 14:20:28.703197 1860751 node_ready.go:38] duration metric: took 30.636379ms for node "default-k8s-diff-port-212529" to be "Ready" ...
	I0127 14:20:28.703206 1860751 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:28.710494 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:28.817820 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:20:28.817849 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:20:28.837871 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:20:28.851072 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:20:28.851107 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:20:28.852529 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:20:28.858946 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:20:28.858978 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:20:28.897376 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:20:28.897409 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:20:28.944458 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:20:28.944489 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:20:28.996770 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:20:28.996799 1860751 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:20:29.041836 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:20:29.066199 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:20:29.066234 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:20:29.191066 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:20:29.191092 1860751 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:20:29.292937 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:20:29.292970 1860751 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:20:29.324574 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:20:29.324605 1860751 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:20:29.381589 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:20:29.381618 1860751 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:20:29.579396 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:20:29.579421 1860751 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:20:29.730806 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:20:30.332634 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.480056609s)
	I0127 14:20:30.332719 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.332740 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.332753 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.494842628s)
	I0127 14:20:30.332799 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.332812 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333060 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333080 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.333120 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.333128 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333246 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333271 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.333280 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.333287 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333331 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
	I0127 14:20:30.333499 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333513 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.335273 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.335291 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.402574 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.402607 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.402929 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.402951 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.597814 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555933063s)
	I0127 14:20:30.597873 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.597890 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.598223 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.598244 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.598254 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.598262 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.598523 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.598545 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.598558 1860751 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-212529"
	I0127 14:20:30.720235 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:31.251992 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.52112686s)
	I0127 14:20:31.252076 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:31.252099 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:31.252456 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:31.252477 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:31.252487 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:31.252495 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:31.252788 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:31.252797 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
	I0127 14:20:31.252810 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:31.254461 1860751 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-212529 addons enable metrics-server
	
	I0127 14:20:31.255681 1860751 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 14:20:31.256922 1860751 addons.go:514] duration metric: took 2.840355251s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 14:20:33.216592 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:35.217244 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:37.731702 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.731733 1860751 pod_ready.go:82] duration metric: took 9.021206919s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.731747 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.761047 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.761074 1860751 pod_ready.go:82] duration metric: took 29.318136ms for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.761084 1860751 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.772463 1860751 pod_ready.go:93] pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.772491 1860751 pod_ready.go:82] duration metric: took 11.399303ms for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.772504 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.780269 1860751 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.780294 1860751 pod_ready.go:82] duration metric: took 7.782307ms for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.780306 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.785276 1860751 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.785304 1860751 pod_ready.go:82] duration metric: took 4.986421ms for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.785315 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.114939 1860751 pod_ready.go:93] pod "kube-proxy-f5fmd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:38.114969 1860751 pod_ready.go:82] duration metric: took 329.644964ms for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.114981 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.515806 1860751 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:38.515832 1860751 pod_ready.go:82] duration metric: took 400.844808ms for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.515841 1860751 pod_ready.go:39] duration metric: took 9.812625577s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:38.515859 1860751 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:20:38.515918 1860751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:20:38.534333 1860751 api_server.go:72] duration metric: took 10.117851719s to wait for apiserver process to appear ...
	I0127 14:20:38.534364 1860751 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:20:38.534390 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:20:38.540410 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 200:
	ok
	I0127 14:20:38.541651 1860751 api_server.go:141] control plane version: v1.32.1
	I0127 14:20:38.541674 1860751 api_server.go:131] duration metric: took 7.30288ms to wait for apiserver health ...
	I0127 14:20:38.541685 1860751 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:20:38.725366 1860751 system_pods.go:59] 9 kube-system pods found
	I0127 14:20:38.725397 1860751 system_pods.go:61] "coredns-668d6bf9bc-g77l4" [4457b047-3339-455e-ab06-15a1e4d7a95f] Running
	I0127 14:20:38.725402 1860751 system_pods.go:61] "coredns-668d6bf9bc-gwfcp" [d557581e-b74a-482d-9c8c-12e1b51d11d5] Running
	I0127 14:20:38.725406 1860751 system_pods.go:61] "etcd-default-k8s-diff-port-212529" [1e347129-845b-4c34-831c-e056cccc90f7] Running
	I0127 14:20:38.725410 1860751 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-212529" [1472d317-bd0d-4957-a955-d69eb5339d2a] Running
	I0127 14:20:38.725414 1860751 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-212529" [0e5e7440-7389-4bc8-9ee5-0e8041edef25] Running
	I0127 14:20:38.725417 1860751 system_pods.go:61] "kube-proxy-f5fmd" [a08f6d90-467b-4972-8c03-d62d07e108e5] Running
	I0127 14:20:38.725422 1860751 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-212529" [34188644-73d6-4567-856a-895cef0abac8] Running
	I0127 14:20:38.725431 1860751 system_pods.go:61] "metrics-server-f79f97bbb-gpkgd" [ec65f4da-1a84-4dab-9969-3ed09e9fdce2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:20:38.725436 1860751 system_pods.go:61] "storage-provisioner" [72ed4f2a-f894-4246-8596-b02befc5fde4] Running
	I0127 14:20:38.725448 1860751 system_pods.go:74] duration metric: took 183.756587ms to wait for pod list to return data ...
	I0127 14:20:38.725461 1860751 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:20:38.916064 1860751 default_sa.go:45] found service account: "default"
	I0127 14:20:38.916100 1860751 default_sa.go:55] duration metric: took 190.628425ms for default service account to be created ...
	I0127 14:20:38.916114 1860751 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:20:39.121453 1860751 system_pods.go:87] 9 kube-system pods found
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	636b03523496e       523cad1a4df73       4 seconds ago       Exited              dashboard-metrics-scraper   9                   d1126f827120f       dashboard-metrics-scraper-86c6bf9756-r4mm5
	31515da1b3cf6       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   941ec0355a1bf       kubernetes-dashboard-7779f9b69b-9522q
	cb83160fc499f       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   fac138c040172       kube-proxy-k69dv
	c90c075cf97ce       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   f2695b4afb41d       coredns-668d6bf9bc-lq9hg
	8c74623345af3       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   d099695525761       coredns-668d6bf9bc-cm66w
	3d05822f61ecc       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   525c66ad0f739       storage-provisioner
	b3764c5e0ee8c       a9e7e6b294baf       21 minutes ago      Running             etcd                        2                   0ee2c21c2c167       etcd-no-preload-591346
	60b8774e71443       019ee182b58e2       21 minutes ago      Running             kube-controller-manager     2                   3e875ffb5aa42       kube-controller-manager-no-preload-591346
	060e4f1e3f2d8       95c0bda56fc4d       21 minutes ago      Running             kube-apiserver              2                   afdc5a6b167cb       kube-apiserver-no-preload-591346
	3ec00cf8ef54b       2b0d6572d062c       21 minutes ago      Running             kube-scheduler              2                   a0d34062c9709       kube-scheduler-no-preload-591346
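
The table above shows dashboard-metrics-scraper on its 9th attempt and already exited again, i.e. a crash loop. A minimal, illustrative way to pull the crash output straight from the node runtime, assuming SSH access to the no-preload-591346 guest (the container ID is the one listed in the table; crictl should be available inside the minikube VM):

  $ minikube -p no-preload-591346 ssh
  $ sudo crictl ps -a --name dashboard-metrics-scraper   # list all attempts, including exited ones
  $ sudo crictl logs 636b03523496e                        # stdout/stderr of the latest failed attempt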
	
	
	==> containerd <==
	Jan 27 14:34:23 no-preload-591346 containerd[559]: time="2025-01-27T14:34:23.290710567Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 14:34:23 no-preload-591346 containerd[559]: time="2025-01-27T14:34:23.292519073Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 14:34:23 no-preload-591346 containerd[559]: time="2025-01-27T14:34:23.292552276Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 14:35:02 no-preload-591346 containerd[559]: time="2025-01-27T14:35:02.281761706Z" level=info msg="CreateContainer within sandbox \"d1126f827120fcbecaf9290de40dfd6b356138ff781f9af5c62ae3b0ce41a260\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 14:35:02 no-preload-591346 containerd[559]: time="2025-01-27T14:35:02.305027921Z" level=info msg="CreateContainer within sandbox \"d1126f827120fcbecaf9290de40dfd6b356138ff781f9af5c62ae3b0ce41a260\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690\""
	Jan 27 14:35:02 no-preload-591346 containerd[559]: time="2025-01-27T14:35:02.305821088Z" level=info msg="StartContainer for \"65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690\""
	Jan 27 14:35:02 no-preload-591346 containerd[559]: time="2025-01-27T14:35:02.367558551Z" level=info msg="StartContainer for \"65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690\" returns successfully"
	Jan 27 14:35:02 no-preload-591346 containerd[559]: time="2025-01-27T14:35:02.411294784Z" level=info msg="shim disconnected" id=65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690 namespace=k8s.io
	Jan 27 14:35:02 no-preload-591346 containerd[559]: time="2025-01-27T14:35:02.411515464Z" level=warning msg="cleaning up after shim disconnected" id=65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690 namespace=k8s.io
	Jan 27 14:35:02 no-preload-591346 containerd[559]: time="2025-01-27T14:35:02.411533521Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 14:35:02 no-preload-591346 containerd[559]: time="2025-01-27T14:35:02.773022113Z" level=info msg="RemoveContainer for \"d0cc283e326e763557d847522cb3b4147b13255526b50e5c896f5b3e682dfa3b\""
	Jan 27 14:35:02 no-preload-591346 containerd[559]: time="2025-01-27T14:35:02.781851788Z" level=info msg="RemoveContainer for \"d0cc283e326e763557d847522cb3b4147b13255526b50e5c896f5b3e682dfa3b\" returns successfully"
	Jan 27 14:39:31 no-preload-591346 containerd[559]: time="2025-01-27T14:39:31.280954264Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 14:39:31 no-preload-591346 containerd[559]: time="2025-01-27T14:39:31.289922246Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 14:39:31 no-preload-591346 containerd[559]: time="2025-01-27T14:39:31.291994484Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 14:39:31 no-preload-591346 containerd[559]: time="2025-01-27T14:39:31.292186850Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 14:40:03 no-preload-591346 containerd[559]: time="2025-01-27T14:40:03.281435150Z" level=info msg="CreateContainer within sandbox \"d1126f827120fcbecaf9290de40dfd6b356138ff781f9af5c62ae3b0ce41a260\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 14:40:03 no-preload-591346 containerd[559]: time="2025-01-27T14:40:03.308687315Z" level=info msg="CreateContainer within sandbox \"d1126f827120fcbecaf9290de40dfd6b356138ff781f9af5c62ae3b0ce41a260\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"636b03523496eb0ca8e094a5d2f02301b32e692e18d41b06b4283293a0c621c4\""
	Jan 27 14:40:03 no-preload-591346 containerd[559]: time="2025-01-27T14:40:03.309738997Z" level=info msg="StartContainer for \"636b03523496eb0ca8e094a5d2f02301b32e692e18d41b06b4283293a0c621c4\""
	Jan 27 14:40:03 no-preload-591346 containerd[559]: time="2025-01-27T14:40:03.374832392Z" level=info msg="StartContainer for \"636b03523496eb0ca8e094a5d2f02301b32e692e18d41b06b4283293a0c621c4\" returns successfully"
	Jan 27 14:40:03 no-preload-591346 containerd[559]: time="2025-01-27T14:40:03.418118106Z" level=info msg="shim disconnected" id=636b03523496eb0ca8e094a5d2f02301b32e692e18d41b06b4283293a0c621c4 namespace=k8s.io
	Jan 27 14:40:03 no-preload-591346 containerd[559]: time="2025-01-27T14:40:03.418199955Z" level=warning msg="cleaning up after shim disconnected" id=636b03523496eb0ca8e094a5d2f02301b32e692e18d41b06b4283293a0c621c4 namespace=k8s.io
	Jan 27 14:40:03 no-preload-591346 containerd[559]: time="2025-01-27T14:40:03.418216110Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 14:40:03 no-preload-591346 containerd[559]: time="2025-01-27T14:40:03.473537241Z" level=info msg="RemoveContainer for \"65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690\""
	Jan 27 14:40:03 no-preload-591346 containerd[559]: time="2025-01-27T14:40:03.484458586Z" level=info msg="RemoveContainer for \"65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690\" returns successfully"
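
The repeated PullImage failures in the containerd log are all for fake.domain/registry.k8s.io/echoserver:1.4, a host that does not resolve, so the metrics-server pod referencing that image never gets past its image pull. An illustrative way to confirm this from outside the node, assuming kubectl is pointed at this cluster and using the pod name that appears in the kubelet log below:

  $ kubectl -n kube-system get pod metrics-server-f79f97bbb-fqckz        # STATUS stays ErrImagePull/ImagePullBackOff
  $ kubectl -n kube-system describe pod metrics-server-f79f97bbb-fqckz   # Events repeat the fake.domain DNS failure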
	
	
	==> coredns [8c74623345af3638fa8ebb1a4bb1a42c5f9b62859b3feca0a43afb019d107896] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c90c075cf97ce5490abaf4cfcbe4ae4a3cf6b01331b87c3d98245d269669d9fd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-591346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-591346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d
	                    minikube.k8s.io/name=no-preload-591346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_18_34_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:18:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-591346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:39:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:38:59 +0000   Mon, 27 Jan 2025 14:18:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:38:59 +0000   Mon, 27 Jan 2025 14:18:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:38:59 +0000   Mon, 27 Jan 2025 14:18:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:38:59 +0000   Mon, 27 Jan 2025 14:18:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    no-preload-591346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 76dd72511c7f40d68d00b5129f27193a
	  System UUID:                76dd7251-1c7f-40d6-8d00-b5129f27193a
	  Boot ID:                    a8e69ab9-d646-4055-bd1d-1b2647d61432
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-cm66w                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-lq9hg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-591346                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-591346              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-591346     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-k69dv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-591346              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-fqckz                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-r4mm5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-9522q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-591346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-591346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-591346 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-591346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-591346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-591346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-591346 event: Registered Node no-preload-591346 in Controller
	
	
	==> dmesg <==
	[  +0.037304] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.900413] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.109227] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.614336] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.548761] systemd-fstab-generator[482]: Ignoring "noauto" option for root device
	[  +0.054422] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055218] systemd-fstab-generator[494]: Ignoring "noauto" option for root device
	[  +0.144647] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +0.132352] systemd-fstab-generator[520]: Ignoring "noauto" option for root device
	[  +0.260893] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[Jan27 14:14] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +1.777790] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +5.599940] kauditd_printk_skb: 265 callbacks suppressed
	[  +7.369508] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.343632] kauditd_printk_skb: 28 callbacks suppressed
	[Jan27 14:18] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +1.435897] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.690197] systemd-fstab-generator[3451]: Ignoring "noauto" option for root device
	[  +0.116845] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.350809] systemd-fstab-generator[3556]: Ignoring "noauto" option for root device
	[  +0.101440] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.460007] kauditd_printk_skb: 108 callbacks suppressed
	[  +7.325515] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [b3764c5e0ee8cddc7ce16ebdef26fd5dc1e4a93c11106522b225cda64a07d5cc] <==
	{"level":"info","ts":"2025-01-27T14:18:53.522128Z","caller":"traceutil/trace.go:171","msg":"trace[351725465] linearizableReadLoop","detail":"{readStateIndex:557; appliedIndex:556; }","duration":"236.416024ms","start":"2025-01-27T14:18:53.285013Z","end":"2025-01-27T14:18:53.521429Z","steps":["trace[351725465] 'read index received'  (duration: 235.643558ms)","trace[351725465] 'applied index is now lower than readState.Index'  (duration: 771.406µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:18:53.525290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.112741ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-f79f97bbb-fqckz.181e92715b63af4d\" limit:1 ","response":"range_response_count:1 size:814"}
	{"level":"warn","ts":"2025-01-27T14:18:53.525308Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.937367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-f79f97bbb-fqckz\" limit:1 ","response":"range_response_count:1 size:4559"}
	{"level":"info","ts":"2025-01-27T14:18:53.525682Z","caller":"traceutil/trace.go:171","msg":"trace[1133534049] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-f79f97bbb-fqckz; range_end:; response_count:1; response_revision:542; }","duration":"240.686535ms","start":"2025-01-27T14:18:53.284980Z","end":"2025-01-27T14:18:53.525666Z","steps":["trace[1133534049] 'agreement among raft nodes before linearized reading'  (duration: 236.892135ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:18:53.525752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.491106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-01-27T14:18:53.525773Z","caller":"traceutil/trace.go:171","msg":"trace[1868715403] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:542; }","duration":"179.575998ms","start":"2025-01-27T14:18:53.346191Z","end":"2025-01-27T14:18:53.525767Z","steps":["trace[1868715403] 'agreement among raft nodes before linearized reading'  (duration: 179.476919ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:18:53.525635Z","caller":"traceutil/trace.go:171","msg":"trace[491107716] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-f79f97bbb-fqckz.181e92715b63af4d; range_end:; response_count:1; response_revision:542; }","duration":"237.022601ms","start":"2025-01-27T14:18:53.288556Z","end":"2025-01-27T14:18:53.525579Z","steps":["trace[491107716] 'agreement among raft nodes before linearized reading'  (duration: 234.104276ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:18:53.525968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.775192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:18:53.525994Z","caller":"traceutil/trace.go:171","msg":"trace[1151975747] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:542; }","duration":"214.826372ms","start":"2025-01-27T14:18:53.311159Z","end":"2025-01-27T14:18:53.525985Z","steps":["trace[1151975747] 'agreement among raft nodes before linearized reading'  (duration: 214.780895ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:18:53.687398Z","caller":"traceutil/trace.go:171","msg":"trace[179814423] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"160.845369ms","start":"2025-01-27T14:18:53.526533Z","end":"2025-01-27T14:18:53.687378Z","steps":["trace[179814423] 'process raft request'  (duration: 153.738874ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:18:53.687536Z","caller":"traceutil/trace.go:171","msg":"trace[1522009240] linearizableReadLoop","detail":"{readStateIndex:560; appliedIndex:557; }","duration":"144.246944ms","start":"2025-01-27T14:18:53.543272Z","end":"2025-01-27T14:18:53.687519Z","steps":["trace[1522009240] 'read index received'  (duration: 137.009876ms)","trace[1522009240] 'applied index is now lower than readState.Index'  (duration: 7.23649ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:18:53.687683Z","caller":"traceutil/trace.go:171","msg":"trace[1052238496] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"122.102408ms","start":"2025-01-27T14:18:53.565574Z","end":"2025-01-27T14:18:53.687676Z","steps":["trace[1052238496] 'process raft request'  (duration: 121.888437ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:18:53.687764Z","caller":"traceutil/trace.go:171","msg":"trace[1653527391] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"151.86451ms","start":"2025-01-27T14:18:53.535894Z","end":"2025-01-27T14:18:53.687759Z","steps":["trace[1653527391] 'process raft request'  (duration: 151.482264ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:18:53.687785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.508524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:18:53.687808Z","caller":"traceutil/trace.go:171","msg":"trace[1344708987] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:546; }","duration":"144.789606ms","start":"2025-01-27T14:18:53.543012Z","end":"2025-01-27T14:18:53.687802Z","steps":["trace[1344708987] 'agreement among raft nodes before linearized reading'  (duration: 144.741753ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:18:53.687826Z","caller":"traceutil/trace.go:171","msg":"trace[1296131599] transaction","detail":"{read_only:false; response_revision:545; number_of_response:1; }","duration":"148.948389ms","start":"2025-01-27T14:18:53.538874Z","end":"2025-01-27T14:18:53.687822Z","steps":["trace[1296131599] 'process raft request'  (duration: 148.545986ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:28:29.494614Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":870}
	{"level":"info","ts":"2025-01-27T14:28:29.527280Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":870,"took":"31.237533ms","hash":1583232050,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2801664,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-01-27T14:28:29.527578Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1583232050,"revision":870,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T14:33:29.502249Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1121}
	{"level":"info","ts":"2025-01-27T14:33:29.507305Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1121,"took":"4.359182ms","hash":1708074492,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1744896,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T14:33:29.507422Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1708074492,"revision":1121,"compact-revision":870}
	{"level":"info","ts":"2025-01-27T14:38:29.514030Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1372}
	{"level":"info","ts":"2025-01-27T14:38:29.518725Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1372,"took":"3.719287ms","hash":1703961378,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1732608,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T14:38:29.518992Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1703961378,"revision":1372,"compact-revision":1121}
	
	
	==> kernel <==
	 14:40:08 up 26 min,  0 users,  load average: 0.00, 0.05, 0.09
	Linux no-preload-591346 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [060e4f1e3f2d885e49a98534a24ce064317682c103573c5fd9d19ed5e73dc081] <==
	I0127 14:36:31.903831       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:36:31.904966       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 14:38:30.901468       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:38:30.901690       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 14:38:31.903852       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:38:31.903931       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 14:38:31.904155       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:38:31.904272       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 14:38:31.905081       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:38:31.906383       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 14:39:31.905463       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:39:31.905584       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 14:39:31.906649       1 handler_proxy.go:99] no RequestInfo found in the context
	I0127 14:39:31.906672       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0127 14:39:31.906828       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 14:39:31.908802       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
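
The 503 / "failed to download v1beta1.metrics.k8s.io" entries are a knock-on effect of the same problem: the aggregated metrics API is registered, but its backing metrics-server pod never becomes Ready, so the apiserver keeps re-queuing the APIService. A quick, illustrative check with kubectl (assuming the same cluster context as above):

  $ kubectl get apiservice v1beta1.metrics.k8s.io                        # AVAILABLE should read False (MissingEndpoints or similar)
  $ kubectl -n kube-system get pod metrics-server-f79f97bbb-fqckz -o wide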
	
	
	==> kube-controller-manager [60b8774e71443ad582c317e9b8862cb974ce73e596d84900c60dc2386c0e606d] <==
	E0127 14:35:08.694507       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:35:08.760082       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:35:38.703049       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:35:38.767250       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:36:08.709237       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:36:08.774919       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:36:38.717373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:36:38.783956       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:37:08.723473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:37:08.795174       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:37:38.731150       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:37:38.802943       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:38:08.737802       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:38:08.810923       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:38:38.743196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:38:38.818763       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 14:38:59.689940       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-591346"
	E0127 14:39:08.749424       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:39:08.832020       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:39:38.757300       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:39:38.840251       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 14:39:44.297701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="446.39µs"
	I0127 14:39:57.294575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="273.415µs"
	I0127 14:40:03.487273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="109.233µs"
	I0127 14:40:06.052894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="59.572µs"
	
	
	==> kube-proxy [cb83160fc499f63d28e7e9df93c5215eb150015b8b1f22567c30f1d070f61523] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:18:42.039574       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:18:42.049934       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.238"]
	E0127 14:18:42.050282       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:18:42.085201       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:18:42.085418       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:18:42.085520       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:18:42.087840       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:18:42.088184       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:18:42.088500       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:18:42.090205       1 config.go:199] "Starting service config controller"
	I0127 14:18:42.090617       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:18:42.090681       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:18:42.090733       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:18:42.093473       1 config.go:329] "Starting node config controller"
	I0127 14:18:42.093533       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:18:42.190798       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:18:42.190825       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:18:42.193696       1 shared_informer.go:320] Caches are synced for node config
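
The nftables errors at the top of this section appear to mean the guest kernel has no nft backend; kube-proxy logs them while cleaning up old rules and then proceeds with the iptables proxier, as shown above. If needed, the programmed rules can be inspected from inside the guest, e.g. (illustrative):

  $ minikube -p no-preload-591346 ssh
  $ sudo iptables-save | grep KUBE- | head    # kube-proxy's KUBE-* chains and rules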
	
	
	==> kube-scheduler [3ec00cf8ef54bdea46d619f435308b319fd0c354c092a821571e85da5eeabf53] <==
	W0127 14:18:31.822863       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 14:18:31.823171       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:31.857005       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 14:18:31.857068       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:31.865155       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 14:18:31.865245       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:31.871489       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 14:18:31.871545       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:31.907229       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 14:18:31.907297       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:31.909882       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 14:18:31.909932       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:32.023895       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 14:18:32.023949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:32.048725       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 14:18:32.048781       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:32.064675       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 14:18:32.064725       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:32.225507       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 14:18:32.225573       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:32.246813       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 14:18:32.246884       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:18:32.430594       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 14:18:32.430859       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 14:18:34.557504       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:39:09 no-preload-591346 kubelet[3457]: E0127 14:39:09.278894    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-r4mm5_kubernetes-dashboard(81d68522-37fb-41f1-ab46-7dc94deb3fbf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-r4mm5" podUID="81d68522-37fb-41f1-ab46-7dc94deb3fbf"
	Jan 27 14:39:19 no-preload-591346 kubelet[3457]: E0127 14:39:19.279056    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-fqckz" podUID="30644e2b-7988-4b55-aa94-fe774b820ed4"
	Jan 27 14:39:22 no-preload-591346 kubelet[3457]: I0127 14:39:22.278236    3457 scope.go:117] "RemoveContainer" containerID="65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690"
	Jan 27 14:39:22 no-preload-591346 kubelet[3457]: E0127 14:39:22.279046    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-r4mm5_kubernetes-dashboard(81d68522-37fb-41f1-ab46-7dc94deb3fbf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-r4mm5" podUID="81d68522-37fb-41f1-ab46-7dc94deb3fbf"
	Jan 27 14:39:31 no-preload-591346 kubelet[3457]: E0127 14:39:31.292462    3457 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 14:39:31 no-preload-591346 kubelet[3457]: E0127 14:39:31.292918    3457 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 14:39:31 no-preload-591346 kubelet[3457]: E0127 14:39:31.293295    3457 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwbjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-fqckz_kube-system(30644e2b-7988-4b55-aa94-fe774b820ed4): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 14:39:31 no-preload-591346 kubelet[3457]: E0127 14:39:31.294669    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-fqckz" podUID="30644e2b-7988-4b55-aa94-fe774b820ed4"
	Jan 27 14:39:34 no-preload-591346 kubelet[3457]: E0127 14:39:34.301713    3457 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 14:39:34 no-preload-591346 kubelet[3457]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 14:39:34 no-preload-591346 kubelet[3457]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:39:34 no-preload-591346 kubelet[3457]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:39:34 no-preload-591346 kubelet[3457]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:39:37 no-preload-591346 kubelet[3457]: I0127 14:39:37.278752    3457 scope.go:117] "RemoveContainer" containerID="65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690"
	Jan 27 14:39:37 no-preload-591346 kubelet[3457]: E0127 14:39:37.279036    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-r4mm5_kubernetes-dashboard(81d68522-37fb-41f1-ab46-7dc94deb3fbf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-r4mm5" podUID="81d68522-37fb-41f1-ab46-7dc94deb3fbf"
	Jan 27 14:39:44 no-preload-591346 kubelet[3457]: E0127 14:39:44.279719    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-fqckz" podUID="30644e2b-7988-4b55-aa94-fe774b820ed4"
	Jan 27 14:39:50 no-preload-591346 kubelet[3457]: I0127 14:39:50.279662    3457 scope.go:117] "RemoveContainer" containerID="65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690"
	Jan 27 14:39:50 no-preload-591346 kubelet[3457]: E0127 14:39:50.279841    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-r4mm5_kubernetes-dashboard(81d68522-37fb-41f1-ab46-7dc94deb3fbf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-r4mm5" podUID="81d68522-37fb-41f1-ab46-7dc94deb3fbf"
	Jan 27 14:39:57 no-preload-591346 kubelet[3457]: E0127 14:39:57.278528    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-fqckz" podUID="30644e2b-7988-4b55-aa94-fe774b820ed4"
	Jan 27 14:40:03 no-preload-591346 kubelet[3457]: I0127 14:40:03.278179    3457 scope.go:117] "RemoveContainer" containerID="65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690"
	Jan 27 14:40:03 no-preload-591346 kubelet[3457]: I0127 14:40:03.470537    3457 scope.go:117] "RemoveContainer" containerID="65910c28cd38ba62cab6323efad7a760074558b9bef7c7cced56225fe1e4c690"
	Jan 27 14:40:03 no-preload-591346 kubelet[3457]: I0127 14:40:03.470903    3457 scope.go:117] "RemoveContainer" containerID="636b03523496eb0ca8e094a5d2f02301b32e692e18d41b06b4283293a0c621c4"
	Jan 27 14:40:03 no-preload-591346 kubelet[3457]: E0127 14:40:03.471109    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-r4mm5_kubernetes-dashboard(81d68522-37fb-41f1-ab46-7dc94deb3fbf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-r4mm5" podUID="81d68522-37fb-41f1-ab46-7dc94deb3fbf"
	Jan 27 14:40:06 no-preload-591346 kubelet[3457]: I0127 14:40:06.037198    3457 scope.go:117] "RemoveContainer" containerID="636b03523496eb0ca8e094a5d2f02301b32e692e18d41b06b4283293a0c621c4"
	Jan 27 14:40:06 no-preload-591346 kubelet[3457]: E0127 14:40:06.037776    3457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-r4mm5_kubernetes-dashboard(81d68522-37fb-41f1-ab46-7dc94deb3fbf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-r4mm5" podUID="81d68522-37fb-41f1-ab46-7dc94deb3fbf"
	
	
	==> kubernetes-dashboard [31515da1b3cf6718d2960cdf915b84680e58bc29df0ea1595ca2a5e675af341a] <==
	2025/01/27 14:27:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:28:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:28:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:29:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:29:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:30:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:30:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:31:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:31:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:32:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:32:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:33:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:33:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:34:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:34:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:35:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:35:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:36:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:36:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:37:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:37:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:38:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:38:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:39:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:39:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3d05822f61eccc80c082d7b20fb311a264430cde784463036114ea7beda213a8] <==
	I0127 14:18:41.052149       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:18:41.089195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:18:41.089257       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:18:41.106593       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:18:41.106728       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-591346_b4d4464d-863f-48b2-b6d2-d3361f9e064d!
	I0127 14:18:41.112838       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3b1d1f58-6f07-4258-bf58-d0f42fb650a5", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-591346_b4d4464d-863f-48b2-b6d2-d3361f9e064d became leader
	I0127 14:18:41.207521       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-591346_b4d4464d-863f-48b2-b6d2-d3361f9e064d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-591346 -n no-preload-591346
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-591346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-fqckz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-591346 describe pod metrics-server-f79f97bbb-fqckz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-591346 describe pod metrics-server-f79f97bbb-fqckz: exit status 1 (62.487827ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-fqckz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-591346 describe pod metrics-server-f79f97bbb-fqckz: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1589.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1590.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-212529 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 14:14:22.899394 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:26.080269 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-212529 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m28.996064355s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-212529] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-212529" primary control-plane node in "default-k8s-diff-port-212529" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-212529" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-212529 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:14:20.833354 1860751 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:14:20.833460 1860751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:14:20.833472 1860751 out.go:358] Setting ErrFile to fd 2...
	I0127 14:14:20.833479 1860751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:14:20.833651 1860751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 14:14:20.834215 1860751 out.go:352] Setting JSON to false
	I0127 14:14:20.835265 1860751 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":39402,"bootTime":1737947859,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:14:20.835381 1860751 start.go:139] virtualization: kvm guest
	I0127 14:14:20.837426 1860751 out.go:177] * [default-k8s-diff-port-212529] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:14:20.838784 1860751 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:14:20.838843 1860751 notify.go:220] Checking for updates...
	I0127 14:14:20.840999 1860751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:14:20.842211 1860751 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:14:20.844006 1860751 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 14:14:20.845316 1860751 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:14:20.846735 1860751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:14:20.848718 1860751 config.go:182] Loaded profile config "default-k8s-diff-port-212529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:14:20.849326 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:14:20.849416 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:14:20.865362 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44417
	I0127 14:14:20.865816 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:14:20.866554 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:14:20.866578 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:14:20.866983 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:14:20.867160 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:14:20.867422 1860751 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:14:20.867708 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:14:20.867747 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:14:20.882265 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0127 14:14:20.882644 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:14:20.883078 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:14:20.883105 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:14:20.883422 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:14:20.883618 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:14:20.922238 1860751 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:14:20.923322 1860751 start.go:297] selected driver: kvm2
	I0127 14:14:20.923342 1860751 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-212529 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8
s-diff-port-212529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.145 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:14:20.923470 1860751 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:14:20.924208 1860751 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:14:20.924308 1860751 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:14:20.938924 1860751 install.go:137] /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:14:20.939443 1860751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:14:20.939488 1860751 cni.go:84] Creating CNI manager for ""
	I0127 14:14:20.939557 1860751 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:14:20.939605 1860751 start.go:340] cluster config:
	{Name:default-k8s-diff-port-212529 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-212529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.145 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home
/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:14:20.939745 1860751 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:14:20.942012 1860751 out.go:177] * Starting "default-k8s-diff-port-212529" primary control-plane node in "default-k8s-diff-port-212529" cluster
	I0127 14:14:20.943002 1860751 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:14:20.943044 1860751 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 14:14:20.943058 1860751 cache.go:56] Caching tarball of preloaded images
	I0127 14:14:20.943161 1860751 preload.go:172] Found /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 14:14:20.943174 1860751 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 14:14:20.943287 1860751 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529/config.json ...
	I0127 14:14:20.943489 1860751 start.go:360] acquireMachinesLock for default-k8s-diff-port-212529: {Name:mk6fcac41a7a21b211b65e56994e625852d1a781 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:14:20.943542 1860751 start.go:364] duration metric: took 31.997µs to acquireMachinesLock for "default-k8s-diff-port-212529"
	I0127 14:14:20.943563 1860751 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:14:20.943579 1860751 fix.go:54] fixHost starting: 
	I0127 14:14:20.943884 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:14:20.943935 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:14:20.958456 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35207
	I0127 14:14:20.959045 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:14:20.959515 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:14:20.959539 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:14:20.959844 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:14:20.960070 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:14:20.960214 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:14:20.961853 1860751 fix.go:112] recreateIfNeeded on default-k8s-diff-port-212529: state=Stopped err=<nil>
	I0127 14:14:20.961885 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	W0127 14:14:20.962043 1860751 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:14:20.963885 1860751 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-212529" ...
	I0127 14:14:20.965048 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Start
	I0127 14:14:20.965225 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) starting domain...
	I0127 14:14:20.965245 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) ensuring networks are active...
	I0127 14:14:20.966057 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Ensuring network default is active
	I0127 14:14:20.966444 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Ensuring network mk-default-k8s-diff-port-212529 is active
	I0127 14:14:20.966839 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) getting domain XML...
	I0127 14:14:20.967685 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) creating domain...
	I0127 14:14:22.212095 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) waiting for IP...
	I0127 14:14:22.213130 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:22.213657 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:22.213693 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:22.213617 1860785 retry.go:31] will retry after 290.082683ms: waiting for domain to come up
	I0127 14:14:22.505269 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:22.505792 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:22.505829 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:22.505760 1860785 retry.go:31] will retry after 388.970033ms: waiting for domain to come up
	I0127 14:14:22.896458 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:22.897078 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:22.897112 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:22.897026 1860785 retry.go:31] will retry after 409.790368ms: waiting for domain to come up
	I0127 14:14:23.308673 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:23.309231 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:23.309263 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:23.309185 1860785 retry.go:31] will retry after 380.00329ms: waiting for domain to come up
	I0127 14:14:23.690777 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:23.691425 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:23.691475 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:23.691397 1860785 retry.go:31] will retry after 548.145023ms: waiting for domain to come up
	I0127 14:14:24.241137 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:24.241714 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:24.241747 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:24.241673 1860785 retry.go:31] will retry after 726.988793ms: waiting for domain to come up
	I0127 14:14:24.970395 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:24.970872 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:24.970906 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:24.970834 1860785 retry.go:31] will retry after 894.28514ms: waiting for domain to come up
	I0127 14:14:25.867229 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:25.867766 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:25.867789 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:25.867742 1860785 retry.go:31] will retry after 1.43299941s: waiting for domain to come up
	I0127 14:14:27.302927 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:27.303463 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:27.303495 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:27.303430 1860785 retry.go:31] will retry after 1.583360022s: waiting for domain to come up
	I0127 14:14:28.889364 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:28.889886 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:28.889940 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:28.889868 1860785 retry.go:31] will retry after 2.170780774s: waiting for domain to come up
	I0127 14:14:31.062958 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:31.063573 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:31.063608 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:31.063525 1860785 retry.go:31] will retry after 1.869539145s: waiting for domain to come up
	I0127 14:14:32.934276 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:32.934711 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:32.934756 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:32.934678 1860785 retry.go:31] will retry after 3.367759001s: waiting for domain to come up
	I0127 14:14:36.303772 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:36.304395 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | unable to find current IP address of domain default-k8s-diff-port-212529 in network mk-default-k8s-diff-port-212529
	I0127 14:14:36.304431 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | I0127 14:14:36.304350 1860785 retry.go:31] will retry after 3.903592902s: waiting for domain to come up
	I0127 14:14:40.212328 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.212834 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) found domain IP: 192.168.50.145
	I0127 14:14:40.212869 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) reserving static IP address...
	I0127 14:14:40.212878 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has current primary IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.213303 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-212529", mac: "52:54:00:b1:8f:73", ip: "192.168.50.145"} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:40.213329 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | skip adding static IP to network mk-default-k8s-diff-port-212529 - found existing host DHCP lease matching {name: "default-k8s-diff-port-212529", mac: "52:54:00:b1:8f:73", ip: "192.168.50.145"}
	I0127 14:14:40.213346 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) reserved static IP address 192.168.50.145 for domain default-k8s-diff-port-212529
	I0127 14:14:40.213357 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Getting to WaitForSSH function...
	I0127 14:14:40.213368 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) waiting for SSH...
	I0127 14:14:40.215623 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.215986 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:40.216022 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.216145 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Using SSH client type: external
	I0127 14:14:40.216173 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa (-rw-------)
	I0127 14:14:40.216194 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:14:40.216204 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | About to run SSH command:
	I0127 14:14:40.216213 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | exit 0
	I0127 14:14:40.338499 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | SSH cmd err, output: <nil>: 
	I0127 14:14:40.338868 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetConfigRaw
	I0127 14:14:40.339569 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetIP
	I0127 14:14:40.342102 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.342429 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:40.342541 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.342692 1860751 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529/config.json ...
	I0127 14:14:40.342930 1860751 machine.go:93] provisionDockerMachine start ...
	I0127 14:14:40.342949 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:14:40.343157 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:14:40.345510 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.345819 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:40.345841 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.345978 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:14:40.346169 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:40.346341 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:40.346483 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:14:40.346649 1860751 main.go:141] libmachine: Using SSH client type: native
	I0127 14:14:40.346935 1860751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.145 22 <nil> <nil>}
	I0127 14:14:40.346953 1860751 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:14:40.450701 1860751 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 14:14:40.450736 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetMachineName
	I0127 14:14:40.451017 1860751 buildroot.go:166] provisioning hostname "default-k8s-diff-port-212529"
	I0127 14:14:40.451054 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetMachineName
	I0127 14:14:40.451272 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:14:40.454069 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.454380 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:40.454420 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.454561 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:14:40.454712 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:40.454883 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:40.455011 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:14:40.455178 1860751 main.go:141] libmachine: Using SSH client type: native
	I0127 14:14:40.455378 1860751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.145 22 <nil> <nil>}
	I0127 14:14:40.455395 1860751 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-212529 && echo "default-k8s-diff-port-212529" | sudo tee /etc/hostname
	I0127 14:14:40.572058 1860751 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-212529
	
	I0127 14:14:40.572097 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:14:40.574801 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.575141 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:40.575173 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.575301 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:14:40.575514 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:40.575673 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:40.575838 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:14:40.576028 1860751 main.go:141] libmachine: Using SSH client type: native
	I0127 14:14:40.576197 1860751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.145 22 <nil> <nil>}
	I0127 14:14:40.576214 1860751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-212529' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-212529/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-212529' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:14:40.690966 1860751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:14:40.691004 1860751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-1798877/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-1798877/.minikube}
	I0127 14:14:40.691037 1860751 buildroot.go:174] setting up certificates
	I0127 14:14:40.691048 1860751 provision.go:84] configureAuth start
	I0127 14:14:40.691067 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetMachineName
	I0127 14:14:40.691391 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetIP
	I0127 14:14:40.694599 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.695039 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:40.695074 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.695230 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:14:40.697785 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.698117 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:40.698156 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.698287 1860751 provision.go:143] copyHostCerts
	I0127 14:14:40.698338 1860751 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem, removing ...
	I0127 14:14:40.698357 1860751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem
	I0127 14:14:40.698421 1860751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem (1078 bytes)
	I0127 14:14:40.698507 1860751 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem, removing ...
	I0127 14:14:40.698516 1860751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem
	I0127 14:14:40.698538 1860751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem (1123 bytes)
	I0127 14:14:40.698586 1860751 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem, removing ...
	I0127 14:14:40.698593 1860751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem
	I0127 14:14:40.698623 1860751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem (1675 bytes)
	I0127 14:14:40.698673 1860751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-212529 san=[127.0.0.1 192.168.50.145 default-k8s-diff-port-212529 localhost minikube]
	I0127 14:14:40.935740 1860751 provision.go:177] copyRemoteCerts
	I0127 14:14:40.935805 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:14:40.935840 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:14:40.938274 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.938539 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:40.938573 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:40.938728 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:14:40.938993 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:40.939198 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:14:40.939424 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:14:41.020389 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:14:41.045258 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 14:14:41.067606 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:14:41.089573 1860751 provision.go:87] duration metric: took 398.50705ms to configureAuth
	I0127 14:14:41.089595 1860751 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:14:41.089823 1860751 config.go:182] Loaded profile config "default-k8s-diff-port-212529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:14:41.089841 1860751 machine.go:96] duration metric: took 746.898244ms to provisionDockerMachine
	I0127 14:14:41.089850 1860751 start.go:293] postStartSetup for "default-k8s-diff-port-212529" (driver="kvm2")
	I0127 14:14:41.089862 1860751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:14:41.089892 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:14:41.090182 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:14:41.090210 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:14:41.093016 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.093337 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:41.093361 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.093513 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:14:41.093813 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:41.094063 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:14:41.094238 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:14:41.178428 1860751 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:14:41.182280 1860751 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:14:41.182310 1860751 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/addons for local assets ...
	I0127 14:14:41.182390 1860751 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/files for local assets ...
	I0127 14:14:41.182474 1860751 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem -> 18060702.pem in /etc/ssl/certs
	I0127 14:14:41.182570 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:14:41.193240 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:14:41.217893 1860751 start.go:296] duration metric: took 128.027053ms for postStartSetup
	I0127 14:14:41.217935 1860751 fix.go:56] duration metric: took 20.27436324s for fixHost
	I0127 14:14:41.217964 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:14:41.220496 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.220829 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:41.220864 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.221010 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:14:41.221215 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:41.221386 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:41.221546 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:14:41.221694 1860751 main.go:141] libmachine: Using SSH client type: native
	I0127 14:14:41.221941 1860751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.145 22 <nil> <nil>}
	I0127 14:14:41.221959 1860751 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:14:41.327157 1860751 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987281.302786730
	
	I0127 14:14:41.327184 1860751 fix.go:216] guest clock: 1737987281.302786730
	I0127 14:14:41.327194 1860751 fix.go:229] Guest: 2025-01-27 14:14:41.30278673 +0000 UTC Remote: 2025-01-27 14:14:41.21793885 +0000 UTC m=+20.431860559 (delta=84.84788ms)
	I0127 14:14:41.327239 1860751 fix.go:200] guest clock delta is within tolerance: 84.84788ms
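(Annotation: the guest-clock step above runs "date +%s.%N" over SSH and compares the result against the host clock before deciding whether to adjust the VM time. A minimal Go sketch of that comparison, assuming a hypothetical one-second tolerance; the real threshold and parsing live in minikube's fix/provision code and the names here are illustrative.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N`
// (e.g. "1737987281.302786730") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1737987281.302786730")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	// Hypothetical tolerance for illustration only.
	const tolerance = 1 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would adjust\n", delta)
	}
}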
	I0127 14:14:41.327246 1860751 start.go:83] releasing machines lock for "default-k8s-diff-port-212529", held for 20.383691315s
	I0127 14:14:41.327288 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:14:41.327586 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetIP
	I0127 14:14:41.330843 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.331243 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:41.331272 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.331493 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:14:41.332031 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:14:41.332216 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:14:41.332302 1860751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:14:41.332362 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:14:41.332409 1860751 ssh_runner.go:195] Run: cat /version.json
	I0127 14:14:41.332439 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:14:41.335161 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.335412 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.335542 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:41.335570 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.335714 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:41.335739 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:41.335747 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:14:41.335955 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:14:41.335970 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:41.336100 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:14:41.336116 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:14:41.336298 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:14:41.336325 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:14:41.336447 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:14:41.415750 1860751 ssh_runner.go:195] Run: systemctl --version
	I0127 14:14:41.447892 1860751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:14:41.453640 1860751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:14:41.453720 1860751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:14:41.472018 1860751 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
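(Annotation: the find/mv step above renames any bridge or podman CNI config in /etc/cni/net.d by appending ".mk_disabled" so they do not conflict with minikube's own CNI. A rough Go equivalent of that rename pass, illustrative only and not minikube's implementation.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir by
// appending ".mk_disabled", mirroring the shell pipeline in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "disable CNI configs:", err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}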
	I0127 14:14:41.472050 1860751 start.go:495] detecting cgroup driver to use...
	I0127 14:14:41.472139 1860751 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 14:14:41.506247 1860751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 14:14:41.521352 1860751 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:14:41.521409 1860751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:14:41.535338 1860751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:14:41.549353 1860751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:14:41.664937 1860751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:14:41.829223 1860751 docker.go:233] disabling docker service ...
	I0127 14:14:41.829338 1860751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:14:41.844344 1860751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:14:41.856506 1860751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:14:41.977302 1860751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:14:42.107664 1860751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:14:42.123148 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:14:42.140740 1860751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 14:14:42.151978 1860751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 14:14:42.164970 1860751 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 14:14:42.165039 1860751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 14:14:42.177007 1860751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:14:42.187527 1860751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 14:14:42.198957 1860751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:14:42.209307 1860751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:14:42.221648 1860751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 14:14:42.237505 1860751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 14:14:42.247541 1860751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 14:14:42.257317 1860751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:14:42.266023 1860751 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:14:42.266077 1860751 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:14:42.277411 1860751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:14:42.288215 1860751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:14:42.422645 1860751 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 14:14:42.463247 1860751 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 14:14:42.463338 1860751 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:14:42.467837 1860751 retry.go:31] will retry after 1.289731447s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 14:14:43.757853 1860751 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
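(Annotation: the retry above ("will retry after 1.289731447s") is a poll-until-exists wait on the containerd socket after the daemon restart. A small Go sketch of that wait, assuming a flat retry interval rather than minikube's jittered backoff; illustrative only.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses,
// roughly mirroring the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}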
	I0127 14:14:43.763523 1860751 start.go:563] Will wait 60s for crictl version
	I0127 14:14:43.763620 1860751 ssh_runner.go:195] Run: which crictl
	I0127 14:14:43.767539 1860751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:14:43.808390 1860751 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 14:14:43.808477 1860751 ssh_runner.go:195] Run: containerd --version
	I0127 14:14:43.836055 1860751 ssh_runner.go:195] Run: containerd --version
	I0127 14:14:43.861786 1860751 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 14:14:43.863160 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetIP
	I0127 14:14:43.866056 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:43.866555 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:14:43.866592 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:14:43.866834 1860751 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 14:14:43.871428 1860751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:14:43.884911 1860751 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-212529 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-212529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.145 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:14:43.885099 1860751 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:14:43.885177 1860751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:14:43.921014 1860751 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:14:43.921041 1860751 containerd.go:534] Images already preloaded, skipping extraction
	I0127 14:14:43.921100 1860751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:14:43.957809 1860751 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:14:43.957845 1860751 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:14:43.957857 1860751 kubeadm.go:934] updating node { 192.168.50.145 8444 v1.32.1 containerd true true} ...
	I0127 14:14:43.957978 1860751 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-212529 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-212529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:14:43.958042 1860751 ssh_runner.go:195] Run: sudo crictl info
	I0127 14:14:43.996310 1860751 cni.go:84] Creating CNI manager for ""
	I0127 14:14:43.996339 1860751 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:14:43.996355 1860751 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:14:43.996388 1860751 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.145 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-212529 NodeName:default-k8s-diff-port-212529 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:14:43.996524 1860751 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-212529"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.145"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.145"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:14:43.996621 1860751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:14:44.007631 1860751 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:14:44.007720 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:14:44.018518 1860751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I0127 14:14:44.037742 1860751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:14:44.056200 1860751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2324 bytes)
	I0127 14:14:44.074071 1860751 ssh_runner.go:195] Run: grep 192.168.50.145	control-plane.minikube.internal$ /etc/hosts
	I0127 14:14:44.078213 1860751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
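(Annotation: the bash one-liner above rewrites /etc/hosts so exactly one entry maps control-plane.minikube.internal to the node IP. A hedged Go sketch of the same idempotent update, writing to a scratch path for illustration instead of sudo-copying over /etc/hosts as the real step does.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<hostname>" and
// appends a fresh "ip\thostname" entry, like the grep -v / echo pipeline.
func ensureHostsEntry(contents, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	orig, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	updated := ensureHostsEntry(string(orig), "192.168.50.145", "control-plane.minikube.internal")
	// Scratch destination for illustration only.
	if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(updated)
}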
	I0127 14:14:44.091177 1860751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:14:44.219048 1860751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:14:44.238166 1860751 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529 for IP: 192.168.50.145
	I0127 14:14:44.238192 1860751 certs.go:194] generating shared ca certs ...
	I0127 14:14:44.238211 1860751 certs.go:226] acquiring lock for ca certs: {Name:mkc6b95fb3d2c0d0c7049cde446028a0d731f231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:14:44.238383 1860751 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key
	I0127 14:14:44.238451 1860751 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key
	I0127 14:14:44.238467 1860751 certs.go:256] generating profile certs ...
	I0127 14:14:44.238586 1860751 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529/client.key
	I0127 14:14:44.238683 1860751 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529/apiserver.key.6828fe8e
	I0127 14:14:44.238779 1860751 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529/proxy-client.key
	I0127 14:14:44.239063 1860751 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem (1338 bytes)
	W0127 14:14:44.239118 1860751 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070_empty.pem, impossibly tiny 0 bytes
	I0127 14:14:44.239135 1860751 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:14:44.239171 1860751 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:14:44.239207 1860751 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:14:44.239241 1860751 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem (1675 bytes)
	I0127 14:14:44.239298 1860751 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:14:44.240219 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:14:44.275867 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:14:44.308386 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:14:44.338940 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 14:14:44.364933 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 14:14:44.399451 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 14:14:44.427965 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:14:44.456090 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/default-k8s-diff-port-212529/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 14:14:44.484965 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem --> /usr/share/ca-certificates/1806070.pem (1338 bytes)
	I0127 14:14:44.508499 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /usr/share/ca-certificates/18060702.pem (1708 bytes)
	I0127 14:14:44.533058 1860751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:14:44.556587 1860751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:14:44.575211 1860751 ssh_runner.go:195] Run: openssl version
	I0127 14:14:44.582986 1860751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:14:44.597004 1860751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:14:44.602474 1860751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:14:44.602544 1860751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:14:44.610053 1860751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:14:44.621477 1860751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806070.pem && ln -fs /usr/share/ca-certificates/1806070.pem /etc/ssl/certs/1806070.pem"
	I0127 14:14:44.632297 1860751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806070.pem
	I0127 14:14:44.637231 1860751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:10 /usr/share/ca-certificates/1806070.pem
	I0127 14:14:44.637289 1860751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806070.pem
	I0127 14:14:44.643153 1860751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806070.pem /etc/ssl/certs/51391683.0"
	I0127 14:14:44.656230 1860751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18060702.pem && ln -fs /usr/share/ca-certificates/18060702.pem /etc/ssl/certs/18060702.pem"
	I0127 14:14:44.667950 1860751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18060702.pem
	I0127 14:14:44.672649 1860751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:10 /usr/share/ca-certificates/18060702.pem
	I0127 14:14:44.672722 1860751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18060702.pem
	I0127 14:14:44.678419 1860751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18060702.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:14:44.690156 1860751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:14:44.696564 1860751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:14:44.702970 1860751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:14:44.708674 1860751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:14:44.714549 1860751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:14:44.720046 1860751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:14:44.725515 1860751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
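(Annotation: each "openssl x509 -noout -in ... -checkend 86400" call above asks whether a control-plane certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509, shown as a sketch; minikube shells out to openssl rather than doing this in-process.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h, would regenerate")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}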
	I0127 14:14:44.732824 1860751 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-212529 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-212529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.145 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:14:44.732963 1860751 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 14:14:44.733014 1860751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:14:44.775379 1860751 cri.go:89] found id: "ca97325aa0d55fc36133f86c849b9c9df7421fe266214cf3df7c45df98be27c5"
	I0127 14:14:44.775405 1860751 cri.go:89] found id: "72f738f2103e6f2eb08a52d5d6d8214cde9a35c1fbc4c16085404be73763615d"
	I0127 14:14:44.775409 1860751 cri.go:89] found id: "6bb734b035660072a10509de91b9e18f9f81e300b17b86ae1a69a813e2c745c7"
	I0127 14:14:44.775412 1860751 cri.go:89] found id: "324dfe36df0070b33c89ee9e1f04955c9fd912283dbf8d85724d9ba98ebb7ede"
	I0127 14:14:44.775416 1860751 cri.go:89] found id: "fa55e362be78357b6e336b120e74330f104fb1abc73de119bb5055ed705c2ddb"
	I0127 14:14:44.775419 1860751 cri.go:89] found id: "87de42719df6a1e1a2d18c7d93cb8e3f392275fe5abcff4cbb89f7e62ce0ba35"
	I0127 14:14:44.775422 1860751 cri.go:89] found id: "0f48e9bba75efec3de528bf7e7c716393d1a58445db9556b1d76f0637e33226c"
	I0127 14:14:44.775425 1860751 cri.go:89] found id: ""
	I0127 14:14:44.775472 1860751 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 14:14:44.794251 1860751 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T14:14:44Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 14:14:44.794331 1860751 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:14:44.805320 1860751 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 14:14:44.805348 1860751 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 14:14:44.805409 1860751 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 14:14:44.815409 1860751 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:14:44.816066 1860751 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-212529" does not appear in /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:14:44.816346 1860751 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-1798877/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-212529" cluster setting kubeconfig missing "default-k8s-diff-port-212529" context setting]
	I0127 14:14:44.816823 1860751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:14:44.818225 1860751 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 14:14:44.827642 1860751 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.145
	I0127 14:14:44.827679 1860751 kubeadm.go:1160] stopping kube-system containers ...
	I0127 14:14:44.827696 1860751 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 14:14:44.827758 1860751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:14:44.864596 1860751 cri.go:89] found id: "ca97325aa0d55fc36133f86c849b9c9df7421fe266214cf3df7c45df98be27c5"
	I0127 14:14:44.864640 1860751 cri.go:89] found id: "72f738f2103e6f2eb08a52d5d6d8214cde9a35c1fbc4c16085404be73763615d"
	I0127 14:14:44.864646 1860751 cri.go:89] found id: "6bb734b035660072a10509de91b9e18f9f81e300b17b86ae1a69a813e2c745c7"
	I0127 14:14:44.864651 1860751 cri.go:89] found id: "324dfe36df0070b33c89ee9e1f04955c9fd912283dbf8d85724d9ba98ebb7ede"
	I0127 14:14:44.864655 1860751 cri.go:89] found id: "fa55e362be78357b6e336b120e74330f104fb1abc73de119bb5055ed705c2ddb"
	I0127 14:14:44.864661 1860751 cri.go:89] found id: "87de42719df6a1e1a2d18c7d93cb8e3f392275fe5abcff4cbb89f7e62ce0ba35"
	I0127 14:14:44.864664 1860751 cri.go:89] found id: "0f48e9bba75efec3de528bf7e7c716393d1a58445db9556b1d76f0637e33226c"
	I0127 14:14:44.864668 1860751 cri.go:89] found id: ""
	I0127 14:14:44.864674 1860751 cri.go:252] Stopping containers: [ca97325aa0d55fc36133f86c849b9c9df7421fe266214cf3df7c45df98be27c5 72f738f2103e6f2eb08a52d5d6d8214cde9a35c1fbc4c16085404be73763615d 6bb734b035660072a10509de91b9e18f9f81e300b17b86ae1a69a813e2c745c7 324dfe36df0070b33c89ee9e1f04955c9fd912283dbf8d85724d9ba98ebb7ede fa55e362be78357b6e336b120e74330f104fb1abc73de119bb5055ed705c2ddb 87de42719df6a1e1a2d18c7d93cb8e3f392275fe5abcff4cbb89f7e62ce0ba35 0f48e9bba75efec3de528bf7e7c716393d1a58445db9556b1d76f0637e33226c]
	I0127 14:14:44.864745 1860751 ssh_runner.go:195] Run: which crictl
	I0127 14:14:44.868750 1860751 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 ca97325aa0d55fc36133f86c849b9c9df7421fe266214cf3df7c45df98be27c5 72f738f2103e6f2eb08a52d5d6d8214cde9a35c1fbc4c16085404be73763615d 6bb734b035660072a10509de91b9e18f9f81e300b17b86ae1a69a813e2c745c7 324dfe36df0070b33c89ee9e1f04955c9fd912283dbf8d85724d9ba98ebb7ede fa55e362be78357b6e336b120e74330f104fb1abc73de119bb5055ed705c2ddb 87de42719df6a1e1a2d18c7d93cb8e3f392275fe5abcff4cbb89f7e62ce0ba35 0f48e9bba75efec3de528bf7e7c716393d1a58445db9556b1d76f0637e33226c
	I0127 14:14:44.909169 1860751 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 14:14:44.925513 1860751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:14:44.935440 1860751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:14:44.935461 1860751 kubeadm.go:157] found existing configuration files:
	
	I0127 14:14:44.935509 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 14:14:44.944784 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:14:44.944844 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:14:44.954707 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 14:14:44.965005 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:14:44.965091 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:14:44.974833 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 14:14:44.983819 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:14:44.983897 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:14:44.993100 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 14:14:45.001660 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:14:45.001719 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:14:45.011224 1860751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:14:45.021274 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:45.139556 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:46.215490 1860751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075890735s)
	I0127 14:14:46.215523 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:46.426528 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:46.488489 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:14:46.566909 1860751 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:14:46.567014 1860751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:47.067075 1860751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:47.567752 1860751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:47.589037 1860751 api_server.go:72] duration metric: took 1.022125022s to wait for apiserver process to appear ...
	I0127 14:14:47.589079 1860751 api_server.go:88] waiting for apiserver healthz status ...
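(Annotation: the healthz wait announced here polls https://192.168.50.145:8444/healthz until it answers 200 OK or the overall timeout fires; each "connection refused" line that follows is one failed probe. A compact Go sketch of such a probe loop, with TLS verification skipped purely for illustration; the real client trusts the cluster CA, and the timeout and interval values below are assumptions.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls url every interval until it returns 200 OK
// or the deadline passes, similar to the api_server.go wait loop.
func waitForHealthz(url string, timeout, interval time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.145:8444/healthz", 4*time.Minute, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver healthz OK")
}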
	I0127 14:14:47.589108 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:14:47.589638 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:14:48.089269 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:14:53.090994 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 14:14:53.091085 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:14:58.092341 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 14:14:58.092404 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:03.093106 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 14:15:03.093149 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:08.094823 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 14:15:08.094864 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:08.511548 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": read tcp 192.168.50.1:58476->192.168.50.145:8444: read: connection reset by peer
	I0127 14:15:08.589856 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:08.590498 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:09.089900 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:09.090506 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:09.589167 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:09.589751 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:10.089415 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:10.090092 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:10.589328 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:10.589945 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:11.090027 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:11.090617 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:11.589793 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:11.590431 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:12.089727 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:12.090441 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:12.590207 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:12.590800 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:13.090137 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:13.090793 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:13.589438 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:13.590039 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:14.089682 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:14.090244 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:14.589971 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:14.590578 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:15.090124 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:15.090714 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:15.590172 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:15.590693 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:16.089781 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:16.090349 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:16.589900 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:16.590462 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:17.090191 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:17.090828 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:17.590150 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:17.590901 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:18.089573 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:18.090195 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:18.589350 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:18.589985 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:19.089643 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:19.090263 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:19.589978 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:19.590685 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:20.090005 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:20.090578 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:20.589792 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:20.590471 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:21.089623 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:21.090325 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:21.590064 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:21.590704 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:22.089246 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:22.089877 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:22.589532 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:22.590247 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:23.089591 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:23.090299 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:23.590024 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:23.590655 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:24.090108 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:24.090689 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:24.590007 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:24.590621 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:25.089234 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:25.089831 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:25.589400 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:25.590018 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:26.089228 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:26.089888 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:26.589254 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:26.589926 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:27.089410 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:27.090176 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:27.589834 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:27.590517 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:28.090191 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:28.090864 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:28.589546 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:28.590258 1860751 api_server.go:269] stopped: https://192.168.50.145:8444/healthz: Get "https://192.168.50.145:8444/healthz": dial tcp 192.168.50.145:8444: connect: connection refused
	I0127 14:15:29.089951 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:30.284831 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:15:30.284873 1860751 api_server.go:103] status: https://192.168.50.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:15:30.284905 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:30.297807 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:15:30.297837 1860751 api_server.go:103] status: https://192.168.50.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:15:30.589243 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:30.595411 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:15:30.595442 1860751 api_server.go:103] status: https://192.168.50.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:15:31.089251 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:31.095802 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:15:31.095830 1860751 api_server.go:103] status: https://192.168.50.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:15:31.589460 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:15:31.594163 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 200:
	ok
	I0127 14:15:31.600479 1860751 api_server.go:141] control plane version: v1.32.1
	I0127 14:15:31.600508 1860751 api_server.go:131] duration metric: took 44.011421055s to wait for apiserver health ...
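[Editor's note] The 44s wait above is the apiserver health poll: api_server.go hits https://192.168.50.145:8444/healthz roughly every 500ms, treating "connection refused", 403 ("system:anonymous" before RBAC bootstrap finishes) and 500 (post-start hooks still pending) as "not ready yet", until it finally gets 200 "ok". A minimal sketch of that loop follows; it is not minikube's actual code, and it skips CA verification only to stay self-contained (the real checker trusts the cluster CA).

// Sketch: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip TLS verification for brevity; the real code verifies the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "healthz returned 200: ok" case above
			}
			// 403/500 mean the apiserver is up but still bootstrapping (RBAC roles, priority classes).
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.145:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}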
	I0127 14:15:31.600518 1860751 cni.go:84] Creating CNI manager for ""
	I0127 14:15:31.600525 1860751 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:15:31.602330 1860751 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:15:31.603799 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:15:31.616855 1860751 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
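[Editor's note] The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not shown in this log. For orientation, the sketch below writes a generic bridge-plugin conflist of the kind the "Configuring bridge CNI" step installs; the subnet and plugin options are assumptions, not minikube's exact template.

// Illustrative only: a generic bridge CNI conflist and a small writer for it.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Writing to a local path here; the real target is /etc/cni/net.d/1-k8s.conflist (root-owned, copied over SSH).
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}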
	I0127 14:15:31.634144 1860751 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:15:31.644075 1860751 system_pods.go:59] 8 kube-system pods found
	I0127 14:15:31.644102 1860751 system_pods.go:61] "coredns-668d6bf9bc-52l5g" [6e72745a-ff82-4372-84aa-283e764b1949] Running
	I0127 14:15:31.644107 1860751 system_pods.go:61] "etcd-default-k8s-diff-port-212529" [e6147b27-8ecd-4a9b-bb9a-4bda981ff5ba] Running
	I0127 14:15:31.644112 1860751 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-212529" [bb1d17f7-d232-4774-9b52-8210fe7491e7] Running
	I0127 14:15:31.644115 1860751 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-212529" [d438882a-2f62-48b8-b543-b1ac8ee889b6] Running
	I0127 14:15:31.644118 1860751 system_pods.go:61] "kube-proxy-mltkm" [fe6b066b-967b-4257-a01e-4a989082767f] Running
	I0127 14:15:31.644123 1860751 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-212529" [144e41df-3ffd-4b1f-900d-30858b7420f5] Running
	I0127 14:15:31.644127 1860751 system_pods.go:61] "metrics-server-f79f97bbb-m4ddb" [e2eb8576-fe90-4860-b7c5-4e0edf055b6d] Pending
	I0127 14:15:31.644130 1860751 system_pods.go:61] "storage-provisioner" [96d15171-9cfd-414e-825a-f4a2c5fad635] Running
	I0127 14:15:31.644135 1860751 system_pods.go:74] duration metric: took 9.971932ms to wait for pod list to return data ...
	I0127 14:15:31.644144 1860751 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:15:31.646850 1860751 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:15:31.646875 1860751 node_conditions.go:123] node cpu capacity is 2
	I0127 14:15:31.646890 1860751 node_conditions.go:105] duration metric: took 2.739298ms to run NodePressure ...
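[Editor's note] The NodePressure verification above reads node capacity (CPU, ephemeral storage) and confirms the node reports no pressure conditions. A client-go sketch of an equivalent check follows; it assumes a kubeconfig at the default location and is not the code minikube actually runs.

// Sketch: print node capacity and flag any memory/disk/PID pressure conditions that are not False.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status != corev1.ConditionFalse {
					fmt.Printf("  pressure condition %s is %s\n", c.Type, c.Status)
				}
			}
		}
	}
}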
	I0127 14:15:31.646913 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:15:31.920207 1860751 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 14:15:31.924010 1860751 retry.go:31] will retry after 218.437122ms: kubelet not initialised
	I0127 14:15:32.149005 1860751 retry.go:31] will retry after 314.292452ms: kubelet not initialised
	I0127 14:15:32.467315 1860751 retry.go:31] will retry after 467.134811ms: kubelet not initialised
	I0127 14:15:32.939808 1860751 retry.go:31] will retry after 635.825134ms: kubelet not initialised
	I0127 14:15:33.582515 1860751 retry.go:31] will retry after 1.29352974s: kubelet not initialised
	I0127 14:15:34.882042 1860751 retry.go:31] will retry after 1.087100488s: kubelet not initialised
	I0127 14:15:35.975175 1860751 retry.go:31] will retry after 3.331207322s: kubelet not initialised
	I0127 14:15:39.313626 1860751 retry.go:31] will retry after 3.668118024s: kubelet not initialised
	I0127 14:15:42.989018 1860751 kubeadm.go:739] kubelet initialised
	I0127 14:15:42.989053 1860751 kubeadm.go:740] duration metric: took 11.068814647s waiting for restarted kubelet to initialise ...
	I0127 14:15:42.989062 1860751 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:15:42.994899 1860751 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace to be "Ready" ...
	I0127 14:15:45.002006 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:47.003960 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:49.004418 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:51.501634 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:53.552740 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:56.001952 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:58.003235 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:00.501065 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:02.501533 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:04.502423 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:07.001629 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:09.502547 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:12.002558 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:14.001525 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:14.001550 1860751 pod_ready.go:82] duration metric: took 31.006623903s for pod "coredns-668d6bf9bc-52l5g" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.001559 1860751 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.006944 1860751 pod_ready.go:93] pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:14.006971 1860751 pod_ready.go:82] duration metric: took 5.405213ms for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.006980 1860751 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.011758 1860751 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:14.011787 1860751 pod_ready.go:82] duration metric: took 4.799325ms for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.011801 1860751 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.015737 1860751 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:14.015756 1860751 pod_ready.go:82] duration metric: took 3.94811ms for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.015765 1860751 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mltkm" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.019402 1860751 pod_ready.go:93] pod "kube-proxy-mltkm" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:14.019422 1860751 pod_ready.go:82] duration metric: took 3.651806ms for pod "kube-proxy-mltkm" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.019430 1860751 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.399329 1860751 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:14.399354 1860751 pod_ready.go:82] duration metric: took 379.917763ms for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:14.399364 1860751 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:16.405954 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:18.906278 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:21.405460 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:23.406558 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:25.905476 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:27.905946 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:30.404725 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:32.407699 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:34.904888 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:36.905441 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:38.906591 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:41.405975 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:43.406102 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:45.905598 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:48.405429 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:50.906255 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:52.906337 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:54.913984 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:57.404965 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:59.405948 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:01.906202 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:04.405894 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:06.906095 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:09.406184 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:11.905676 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:14.405357 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:16.405565 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:18.405940 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:20.905776 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:23.406000 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:25.904994 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:27.906399 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:30.405535 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:32.405731 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:34.407723 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:36.905447 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:38.908139 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:41.405928 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:43.407509 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:45.408727 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:47.906712 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:49.906830 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:51.906917 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:54.406624 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:56.905556 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:17:59.405465 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:01.405583 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:03.905998 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:06.406917 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:08.905409 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:10.908324 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:13.406859 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:16.012013 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:18.409379 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:20.906154 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:22.907278 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:25.406890 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:27.904806 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:29.906321 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:32.406594 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:34.406813 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:36.406955 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:38.904663 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:40.906503 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:42.906895 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:45.405827 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:47.405959 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:49.408149 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:51.906048 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:53.906496 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:55.906587 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:58.405047 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:00.906487 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:03.405134 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:05.405708 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:07.406716 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:09.905446 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:11.906081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:13.906183 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:16.407410 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:18.408328 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:20.905706 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:22.906390 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:25.405847 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:27.406081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:29.406653 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:31.905101 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:33.906032 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:36.406416 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:38.905541 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:41.405451 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:43.405883 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:45.905497 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:47.905917 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:50.405296 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:52.405989 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:54.905953 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:56.906021 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:58.906598 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:01.405909 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:03.406128 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:05.906092 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:08.405216 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:10.405449 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:12.905583 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:14.399935 1860751 pod_ready.go:82] duration metric: took 4m0.000530283s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" ...
	E0127 14:20:14.399966 1860751 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 14:20:14.399992 1860751 pod_ready.go:39] duration metric: took 4m31.410913398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
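[Editor's note] The long run of pod_ready lines above polls each system-critical pod's Ready condition every few seconds; metrics-server-f79f97bbb-m4ddb never reports Ready within the 4m0s budget, which is what trips the "will not retry" error. The following client-go sketch shows the kind of Ready-condition check being polled (assumed usage, not minikube's pod_ready.go).

// Sketch: report whether a pod's Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(cs, "kube-system", "metrics-server-f79f97bbb-m4ddb")
	fmt.Println("ready:", ready, "err:", err)
}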
	I0127 14:20:14.400032 1860751 kubeadm.go:597] duration metric: took 5m29.594675564s to restartPrimaryControlPlane
	W0127 14:20:14.400141 1860751 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 14:20:14.400175 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 14:20:15.909704 1860751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.509493932s)
	I0127 14:20:15.909782 1860751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:20:15.925857 1860751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:20:15.935803 1860751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:20:15.946508 1860751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:20:15.946527 1860751 kubeadm.go:157] found existing configuration files:
	
	I0127 14:20:15.946566 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 14:20:15.956633 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:20:15.956690 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:20:15.966965 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 14:20:15.984740 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:20:15.984801 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:20:15.995541 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 14:20:16.005543 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:20:16.005605 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:20:16.015855 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 14:20:16.025594 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:20:16.025640 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
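[Editor's note] The grep/rm sequence above is the stale-config cleanup: each /etc/kubernetes/*.conf that does not reference the expected control-plane URL (here https://control-plane.minikube.internal:8444) is removed so the subsequent kubeadm init rewrites it. A local-filesystem sketch of the same check follows; minikube actually runs the equivalent grep and rm over SSH.

// Sketch: delete kubeconfig files that are missing or point at the wrong control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const want = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), want) {
			// Missing or stale: remove so `kubeadm init` regenerates it.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}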
	I0127 14:20:16.035989 1860751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:20:16.197395 1860751 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:20:24.074171 1860751 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:20:24.074259 1860751 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:20:24.074369 1860751 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:20:24.074528 1860751 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:20:24.074657 1860751 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:20:24.074731 1860751 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:20:24.076292 1860751 out.go:235]   - Generating certificates and keys ...
	I0127 14:20:24.076373 1860751 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:20:24.076450 1860751 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:20:24.076532 1860751 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:20:24.076585 1860751 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:20:24.076644 1860751 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:20:24.076713 1860751 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:20:24.076800 1860751 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:20:24.076884 1860751 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:20:24.076992 1860751 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:20:24.077103 1860751 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:20:24.077169 1860751 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:20:24.077243 1860751 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:20:24.077289 1860751 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:20:24.077349 1860751 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:20:24.077397 1860751 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:20:24.077468 1860751 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:20:24.077537 1860751 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:20:24.077610 1860751 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:20:24.077669 1860751 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:20:24.078852 1860751 out.go:235]   - Booting up control plane ...
	I0127 14:20:24.078965 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:20:24.079055 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:20:24.079140 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:20:24.079285 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:20:24.079429 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:20:24.079489 1860751 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:20:24.079690 1860751 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:20:24.079833 1860751 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:20:24.079921 1860751 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.61135ms
	I0127 14:20:24.080007 1860751 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:20:24.080110 1860751 kubeadm.go:310] [api-check] The API server is healthy after 5.001239504s
	I0127 14:20:24.080256 1860751 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:20:24.080387 1860751 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:20:24.080441 1860751 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:20:24.080637 1860751 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-212529 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:20:24.080711 1860751 kubeadm.go:310] [bootstrap-token] Using token: pxjq5d.hk6ws8nooc0hkr03
	I0127 14:20:24.082018 1860751 out.go:235]   - Configuring RBAC rules ...
	I0127 14:20:24.082176 1860751 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:20:24.082314 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:20:24.082518 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:20:24.082703 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:20:24.082889 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:20:24.083015 1860751 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:20:24.083173 1860751 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:20:24.083250 1860751 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:20:24.083301 1860751 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:20:24.083311 1860751 kubeadm.go:310] 
	I0127 14:20:24.083396 1860751 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:20:24.083407 1860751 kubeadm.go:310] 
	I0127 14:20:24.083513 1860751 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:20:24.083522 1860751 kubeadm.go:310] 
	I0127 14:20:24.083558 1860751 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:20:24.083655 1860751 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:20:24.083734 1860751 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:20:24.083743 1860751 kubeadm.go:310] 
	I0127 14:20:24.083802 1860751 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:20:24.083810 1860751 kubeadm.go:310] 
	I0127 14:20:24.083852 1860751 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:20:24.083858 1860751 kubeadm.go:310] 
	I0127 14:20:24.083921 1860751 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:20:24.084043 1860751 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:20:24.084140 1860751 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:20:24.084149 1860751 kubeadm.go:310] 
	I0127 14:20:24.084263 1860751 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:20:24.084383 1860751 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:20:24.084400 1860751 kubeadm.go:310] 
	I0127 14:20:24.084497 1860751 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
	I0127 14:20:24.084584 1860751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e \
	I0127 14:20:24.084604 1860751 kubeadm.go:310] 	--control-plane 
	I0127 14:20:24.084610 1860751 kubeadm.go:310] 
	I0127 14:20:24.084679 1860751 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:20:24.084685 1860751 kubeadm.go:310] 
	I0127 14:20:24.084750 1860751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
	I0127 14:20:24.084894 1860751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e 
	I0127 14:20:24.084923 1860751 cni.go:84] Creating CNI manager for ""
	I0127 14:20:24.084937 1860751 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:20:24.086257 1860751 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:20:24.087300 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:20:24.097744 1860751 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:20:24.115867 1860751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:20:24.115958 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:24.115962 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-212529 minikube.k8s.io/updated_at=2025_01_27T14_20_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=default-k8s-diff-port-212529 minikube.k8s.io/primary=true
	I0127 14:20:24.324045 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:24.324042 1860751 ops.go:34] apiserver oom_adj: -16
	I0127 14:20:24.824528 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:25.324196 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:25.824971 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:26.324285 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:26.825007 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:27.324812 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:27.824252 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:28.324496 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:28.413845 1860751 kubeadm.go:1113] duration metric: took 4.297974897s to wait for elevateKubeSystemPrivileges
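[Editor's note] The elevateKubeSystemPrivileges step above creates the minikube-rbac cluster-admin binding for kube-system:default and then re-runs `kubectl get sa default` about every 500ms until the default ServiceAccount exists (roughly 4.3s here). A minimal sketch of that retry loop follows; it assumes kubectl on PATH with a reachable kubeconfig, whereas minikube issues the command through its SSH runner.

// Sketch: poll `kubectl get sa default` until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default ServiceAccount is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}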
	I0127 14:20:28.413890 1860751 kubeadm.go:394] duration metric: took 5m43.681075591s to StartCluster
	I0127 14:20:28.413911 1860751 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:20:28.414029 1860751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:20:28.416135 1860751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:20:28.416434 1860751 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.145 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 14:20:28.416580 1860751 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:20:28.416710 1860751 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416715 1860751 config.go:182] Loaded profile config "default-k8s-diff-port-212529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:20:28.416736 1860751 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.416745 1860751 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:20:28.416742 1860751 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416759 1860751 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416785 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.416797 1860751 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.416807 1860751 addons.go:247] addon dashboard should already be in state true
	I0127 14:20:28.416843 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.417198 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417233 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.417240 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417275 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.416772 1860751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-212529"
	I0127 14:20:28.416777 1860751 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.417322 1860751 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.417337 1860751 addons.go:247] addon metrics-server should already be in state true
	I0127 14:20:28.417560 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.417900 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417916 1860751 out.go:177] * Verifying Kubernetes components...
	I0127 14:20:28.417955 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.417963 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.418005 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.419061 1860751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:20:28.434949 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
	I0127 14:20:28.435505 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.436082 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.436114 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.436521 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.436752 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.437523 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0127 14:20:28.437697 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I0127 14:20:28.438072 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.438417 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.438657 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.438682 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.438906 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.438929 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.439056 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.439281 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.439489 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
	I0127 14:20:28.439624 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.439660 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.439804 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.439846 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.439944 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.440409 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.440432 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.440811 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.441377 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.441420 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.441785 1860751 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.441804 1860751 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:20:28.441836 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.442074 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.442111 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.460558 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I0127 14:20:28.461043 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
	I0127 14:20:28.461200 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.461461 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.461725 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.461749 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.461814 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0127 14:20:28.462061 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.462083 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.462286 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.462330 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.462485 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.462605 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.462762 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.462775 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.462832 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.463228 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.463817 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.463862 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.464659 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.465253 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.466108 1860751 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:20:28.466667 1860751 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:20:28.467300 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:20:28.467316 1860751 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:20:28.467333 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.469055 1860751 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:20:28.469287 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
	I0127 14:20:28.469629 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.470009 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:20:28.470027 1860751 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:20:28.470055 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.470158 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.470180 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.470774 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.470967 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.471164 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.471781 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.471814 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.472153 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.472327 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.472488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.472639 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.473502 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.473853 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.474311 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.474338 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.474488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.474652 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.474805 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.474896 1860751 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:20:28.474964 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.475898 1860751 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:20:28.475916 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:20:28.475933 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.478521 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.478927 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.478950 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.479131 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.479325 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.479479 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.479622 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.482246 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I0127 14:20:28.482637 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.483047 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.483068 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.483409 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.483542 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.484999 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.485241 1860751 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:20:28.485259 1860751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:20:28.485276 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.488061 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.488402 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.488429 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.488581 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.488725 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.488858 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.489030 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.646865 1860751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:20:28.672532 1860751 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-212529" to be "Ready" ...
	I0127 14:20:28.703176 1860751 node_ready.go:49] node "default-k8s-diff-port-212529" has status "Ready":"True"
	I0127 14:20:28.703197 1860751 node_ready.go:38] duration metric: took 30.636379ms for node "default-k8s-diff-port-212529" to be "Ready" ...
	I0127 14:20:28.703206 1860751 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:28.710494 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:28.817820 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:20:28.817849 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:20:28.837871 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:20:28.851072 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:20:28.851107 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:20:28.852529 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:20:28.858946 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:20:28.858978 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:20:28.897376 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:20:28.897409 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:20:28.944458 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:20:28.944489 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:20:28.996770 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:20:28.996799 1860751 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:20:29.041836 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:20:29.066199 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:20:29.066234 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:20:29.191066 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:20:29.191092 1860751 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:20:29.292937 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:20:29.292970 1860751 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:20:29.324574 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:20:29.324605 1860751 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:20:29.381589 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:20:29.381618 1860751 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:20:29.579396 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:20:29.579421 1860751 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:20:29.730806 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:20:30.332634 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.480056609s)
	I0127 14:20:30.332719 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.332740 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.332753 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.494842628s)
	I0127 14:20:30.332799 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.332812 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333060 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333080 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.333120 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.333128 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333246 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333271 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.333280 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.333287 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333331 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
	I0127 14:20:30.333499 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333513 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.335273 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.335291 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.402574 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.402607 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.402929 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.402951 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.597814 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555933063s)
	I0127 14:20:30.597873 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.597890 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.598223 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.598244 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.598254 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.598262 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.598523 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.598545 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.598558 1860751 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-212529"
	I0127 14:20:30.720235 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:31.251992 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.52112686s)
	I0127 14:20:31.252076 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:31.252099 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:31.252456 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:31.252477 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:31.252487 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:31.252495 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:31.252788 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:31.252797 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
	I0127 14:20:31.252810 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:31.254461 1860751 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-212529 addons enable metrics-server
	
	I0127 14:20:31.255681 1860751 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 14:20:31.256922 1860751 addons.go:514] duration metric: took 2.840355251s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 14:20:33.216592 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:35.217244 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:37.731702 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.731733 1860751 pod_ready.go:82] duration metric: took 9.021206919s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.731747 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.761047 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.761074 1860751 pod_ready.go:82] duration metric: took 29.318136ms for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.761084 1860751 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.772463 1860751 pod_ready.go:93] pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.772491 1860751 pod_ready.go:82] duration metric: took 11.399303ms for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.772504 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.780269 1860751 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.780294 1860751 pod_ready.go:82] duration metric: took 7.782307ms for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.780306 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.785276 1860751 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.785304 1860751 pod_ready.go:82] duration metric: took 4.986421ms for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.785315 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.114939 1860751 pod_ready.go:93] pod "kube-proxy-f5fmd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:38.114969 1860751 pod_ready.go:82] duration metric: took 329.644964ms for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.114981 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.515806 1860751 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:38.515832 1860751 pod_ready.go:82] duration metric: took 400.844808ms for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.515841 1860751 pod_ready.go:39] duration metric: took 9.812625577s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:38.515859 1860751 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:20:38.515918 1860751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:20:38.534333 1860751 api_server.go:72] duration metric: took 10.117851719s to wait for apiserver process to appear ...
	I0127 14:20:38.534364 1860751 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:20:38.534390 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:20:38.540410 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 200:
	ok
	I0127 14:20:38.541651 1860751 api_server.go:141] control plane version: v1.32.1
	I0127 14:20:38.541674 1860751 api_server.go:131] duration metric: took 7.30288ms to wait for apiserver health ...
	I0127 14:20:38.541685 1860751 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:20:38.725366 1860751 system_pods.go:59] 9 kube-system pods found
	I0127 14:20:38.725397 1860751 system_pods.go:61] "coredns-668d6bf9bc-g77l4" [4457b047-3339-455e-ab06-15a1e4d7a95f] Running
	I0127 14:20:38.725402 1860751 system_pods.go:61] "coredns-668d6bf9bc-gwfcp" [d557581e-b74a-482d-9c8c-12e1b51d11d5] Running
	I0127 14:20:38.725406 1860751 system_pods.go:61] "etcd-default-k8s-diff-port-212529" [1e347129-845b-4c34-831c-e056cccc90f7] Running
	I0127 14:20:38.725410 1860751 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-212529" [1472d317-bd0d-4957-a955-d69eb5339d2a] Running
	I0127 14:20:38.725414 1860751 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-212529" [0e5e7440-7389-4bc8-9ee5-0e8041edef25] Running
	I0127 14:20:38.725417 1860751 system_pods.go:61] "kube-proxy-f5fmd" [a08f6d90-467b-4972-8c03-d62d07e108e5] Running
	I0127 14:20:38.725422 1860751 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-212529" [34188644-73d6-4567-856a-895cef0abac8] Running
	I0127 14:20:38.725431 1860751 system_pods.go:61] "metrics-server-f79f97bbb-gpkgd" [ec65f4da-1a84-4dab-9969-3ed09e9fdce2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:20:38.725436 1860751 system_pods.go:61] "storage-provisioner" [72ed4f2a-f894-4246-8596-b02befc5fde4] Running
	I0127 14:20:38.725448 1860751 system_pods.go:74] duration metric: took 183.756587ms to wait for pod list to return data ...
	I0127 14:20:38.725461 1860751 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:20:38.916064 1860751 default_sa.go:45] found service account: "default"
	I0127 14:20:38.916100 1860751 default_sa.go:55] duration metric: took 190.628425ms for default service account to be created ...
	I0127 14:20:38.916114 1860751 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:20:39.121453 1860751 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-212529 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-212529 -n default-k8s-diff-port-212529
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-212529 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-212529 logs -n 25: (1.171975417s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-635679                 | embed-certs-635679           | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-635679                                  | embed-certs-635679           | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-591346                  | no-preload-591346            | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-591346                                   | no-preload-591346            | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-212529       | default-k8s-diff-port-212529 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-212529 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC |                     |
	|         | default-k8s-diff-port-212529                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-908018             | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-908018 image                           | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	| delete  | -p old-k8s-version-908018                              | old-k8s-version-908018       | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	| start   | -p newest-cni-309688 --memory=2200 --alsologtostderr   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-309688             | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-309688                  | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-309688 --memory=2200 --alsologtostderr   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-309688 image list                           | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	| delete  | -p newest-cni-309688                                   | newest-cni-309688            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	| delete  | -p no-preload-591346                                   | no-preload-591346            | jenkins | v1.35.0 | 27 Jan 25 14:40 UTC | 27 Jan 25 14:40 UTC |
	| delete  | -p embed-certs-635679                                  | embed-certs-635679           | jenkins | v1.35.0 | 27 Jan 25 14:40 UTC | 27 Jan 25 14:40 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:18:41
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:18:41.854015 1863329 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:18:41.854179 1863329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:18:41.854190 1863329 out.go:358] Setting ErrFile to fd 2...
	I0127 14:18:41.854197 1863329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:18:41.854387 1863329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 14:18:41.855024 1863329 out.go:352] Setting JSON to false
	I0127 14:18:41.856109 1863329 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":39663,"bootTime":1737947859,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:18:41.856224 1863329 start.go:139] virtualization: kvm guest
	I0127 14:18:41.858116 1863329 out.go:177] * [newest-cni-309688] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:18:41.859411 1863329 notify.go:220] Checking for updates...
	I0127 14:18:41.859457 1863329 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:18:41.860616 1863329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:18:41.861927 1863329 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:18:41.863092 1863329 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 14:18:41.864171 1863329 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:18:41.865251 1863329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:18:41.866889 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:18:41.867384 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:41.867442 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:41.883915 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0127 14:18:41.884516 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:41.885154 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:18:41.885177 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:41.885640 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:41.885855 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:18:41.886202 1863329 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:18:41.886661 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:41.886728 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:41.904702 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0127 14:18:41.905242 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:41.905789 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:18:41.905815 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:41.906241 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:41.906460 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:18:41.947119 1863329 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:18:41.948433 1863329 start.go:297] selected driver: kvm2
	I0127 14:18:41.948449 1863329 start.go:901] validating driver "kvm2" against &{Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:18:41.948615 1863329 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:18:41.949339 1863329 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:18:41.949417 1863329 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:18:41.966476 1863329 install.go:137] /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:18:41.966978 1863329 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 14:18:41.967016 1863329 cni.go:84] Creating CNI manager for ""
	I0127 14:18:41.967062 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:18:41.967095 1863329 start.go:340] cluster config:
	{Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:18:41.967211 1863329 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:18:41.969136 1863329 out.go:177] * Starting "newest-cni-309688" primary control-plane node in "newest-cni-309688" cluster
	I0127 14:18:41.970047 1863329 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:18:41.970083 1863329 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 14:18:41.970090 1863329 cache.go:56] Caching tarball of preloaded images
	I0127 14:18:41.970203 1863329 preload.go:172] Found /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 14:18:41.970215 1863329 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 14:18:41.970348 1863329 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/config.json ...
	I0127 14:18:41.970570 1863329 start.go:360] acquireMachinesLock for newest-cni-309688: {Name:mk6fcac41a7a21b211b65e56994e625852d1a781 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:18:41.970626 1863329 start.go:364] duration metric: took 32.288µs to acquireMachinesLock for "newest-cni-309688"
	I0127 14:18:41.970646 1863329 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:18:41.970657 1863329 fix.go:54] fixHost starting: 
	I0127 14:18:41.971072 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:18:41.971127 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:18:41.987333 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0127 14:18:41.987957 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:18:41.988457 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:18:41.988482 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:18:41.988963 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:18:41.989252 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:18:41.989407 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:18:41.991188 1863329 fix.go:112] recreateIfNeeded on newest-cni-309688: state=Stopped err=<nil>
	I0127 14:18:41.991220 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	W0127 14:18:41.991396 1863329 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:18:41.993400 1863329 out.go:177] * Restarting existing kvm2 VM for "newest-cni-309688" ...
	I0127 14:18:39.739774 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:18:39.739799 1860441 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:18:39.776579 1860441 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:39.776612 1860441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:18:39.821641 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:18:39.821669 1860441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:18:39.837528 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:18:39.899562 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:18:39.899592 1860441 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:18:39.941841 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:18:39.941883 1860441 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:18:39.958020 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:18:39.958049 1860441 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:18:39.985706 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:18:39.985733 1860441 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:18:40.018166 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:40.018198 1860441 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:18:40.049338 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:18:40.335449 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335486 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335522 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335544 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335886 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.335906 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.335921 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.335932 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.335939 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.335940 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.336011 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336058 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.336071 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.336079 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.336199 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336202 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.336210 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.336321 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.336339 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.361215 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.361236 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.361528 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.361572 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.361588 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.976702 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139130092s)
	I0127 14:18:40.976753 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.976768 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.977190 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.977233 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.977244 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.977254 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:40.977278 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:40.977544 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:40.977626 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:40.977659 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:40.977685 1860441 addons.go:479] Verifying addon metrics-server=true in "no-preload-591346"
	I0127 14:18:41.537877 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:41.993401 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.943993844s)
	I0127 14:18:41.993457 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:41.993474 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:41.993713 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:41.993737 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:41.993755 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:41.993778 1860441 main.go:141] libmachine: Making call to close driver server
	I0127 14:18:41.993785 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
	I0127 14:18:41.994133 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
	I0127 14:18:41.994158 1860441 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:18:41.994172 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:18:41.995251 1860441 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-591346 addons enable metrics-server
	
	I0127 14:18:41.996556 1860441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 14:18:41.997692 1860441 addons.go:514] duration metric: took 2.74201161s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 14:18:43.539748 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:40.906503 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:42.906895 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:45.405827 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:41.996357 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Start
	I0127 14:18:41.996613 1863329 main.go:141] libmachine: (newest-cni-309688) starting domain...
	I0127 14:18:41.996630 1863329 main.go:141] libmachine: (newest-cni-309688) ensuring networks are active...
	I0127 14:18:41.997620 1863329 main.go:141] libmachine: (newest-cni-309688) Ensuring network default is active
	I0127 14:18:41.998106 1863329 main.go:141] libmachine: (newest-cni-309688) Ensuring network mk-newest-cni-309688 is active
	I0127 14:18:41.998535 1863329 main.go:141] libmachine: (newest-cni-309688) getting domain XML...
	I0127 14:18:41.999349 1863329 main.go:141] libmachine: (newest-cni-309688) creating domain...
	I0127 14:18:43.362085 1863329 main.go:141] libmachine: (newest-cni-309688) waiting for IP...
	I0127 14:18:43.363264 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:43.363792 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:43.363901 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.363777 1863364 retry.go:31] will retry after 245.978549ms: waiting for domain to come up
	I0127 14:18:43.611613 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:43.612280 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:43.612314 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.612267 1863364 retry.go:31] will retry after 277.473907ms: waiting for domain to come up
	I0127 14:18:43.891925 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:43.892577 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:43.892608 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.892527 1863364 retry.go:31] will retry after 327.737062ms: waiting for domain to come up
	I0127 14:18:44.221804 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:44.222337 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:44.222385 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:44.222298 1863364 retry.go:31] will retry after 472.286938ms: waiting for domain to come up
	I0127 14:18:44.695804 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:44.696473 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:44.696498 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:44.696438 1863364 retry.go:31] will retry after 556.965256ms: waiting for domain to come up
	I0127 14:18:45.254693 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:45.255242 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:45.255276 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:45.255189 1863364 retry.go:31] will retry after 809.038394ms: waiting for domain to come up
	I0127 14:18:46.066036 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:46.066585 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:46.066616 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:46.066540 1863364 retry.go:31] will retry after 758.303359ms: waiting for domain to come up
	I0127 14:18:46.826373 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:46.826997 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:46.827029 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:46.826933 1863364 retry.go:31] will retry after 1.102767077s: waiting for domain to come up
	I0127 14:18:46.040082 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:47.537709 1860441 pod_ready.go:93] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.537735 1860441 pod_ready.go:82] duration metric: took 8.005981983s for pod "etcd-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.537745 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.545174 1860441 pod_ready.go:93] pod "kube-apiserver-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.545199 1860441 pod_ready.go:82] duration metric: took 7.447836ms for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.545210 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.564920 1860441 pod_ready.go:93] pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.564957 1860441 pod_ready.go:82] duration metric: took 19.735587ms for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.564973 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.588782 1860441 pod_ready.go:93] pod "kube-proxy-k69dv" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.588811 1860441 pod_ready.go:82] duration metric: took 23.829861ms for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.588824 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.598620 1860441 pod_ready.go:93] pod "kube-scheduler-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
	I0127 14:18:47.598656 1860441 pod_ready.go:82] duration metric: took 9.822306ms for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
	I0127 14:18:47.598668 1860441 pod_ready.go:39] duration metric: took 8.076081083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:18:47.598693 1860441 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:18:47.598793 1860441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:18:47.615862 1860441 api_server.go:72] duration metric: took 8.36019503s to wait for apiserver process to appear ...
	I0127 14:18:47.615895 1860441 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:18:47.615918 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0127 14:18:47.631872 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0127 14:18:47.632742 1860441 api_server.go:141] control plane version: v1.32.1
	I0127 14:18:47.632766 1860441 api_server.go:131] duration metric: took 16.863539ms to wait for apiserver health ...
	I0127 14:18:47.632774 1860441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:18:47.739770 1860441 system_pods.go:59] 9 kube-system pods found
	I0127 14:18:47.739814 1860441 system_pods.go:61] "coredns-668d6bf9bc-cm66w" [97ffe415-a70c-44a4-aa07-5b99576c749d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:18:47.739824 1860441 system_pods.go:61] "coredns-668d6bf9bc-lq9hg" [688b4191-8c28-440b-bc93-d52964fe105c] Running
	I0127 14:18:47.739833 1860441 system_pods.go:61] "etcd-no-preload-591346" [01ae260c-cbf6-4f04-be4e-565f3f408c45] Running
	I0127 14:18:47.739838 1860441 system_pods.go:61] "kube-apiserver-no-preload-591346" [1433350f-5302-42e1-8763-0f8bbde34676] Running
	I0127 14:18:47.739842 1860441 system_pods.go:61] "kube-controller-manager-no-preload-591346" [49eab0a5-09c9-4a2d-9913-1b45c145b52a] Running
	I0127 14:18:47.739846 1860441 system_pods.go:61] "kube-proxy-k69dv" [393d6681-7d87-479a-94d3-5ff6cbfe16ed] Running
	I0127 14:18:47.739849 1860441 system_pods.go:61] "kube-scheduler-no-preload-591346" [9f5af2ad-71a3-4481-a18a-8477f843553a] Running
	I0127 14:18:47.739855 1860441 system_pods.go:61] "metrics-server-f79f97bbb-fqckz" [30644e2b-7988-4b55-aa94-fe774b820ed4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:18:47.739859 1860441 system_pods.go:61] "storage-provisioner" [f10d2d4c-7f96-4ff6-b6ae-71b7918fd3ee] Running
	I0127 14:18:47.739866 1860441 system_pods.go:74] duration metric: took 107.08564ms to wait for pod list to return data ...
	I0127 14:18:47.739874 1860441 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:18:47.936494 1860441 default_sa.go:45] found service account: "default"
	I0127 14:18:47.936524 1860441 default_sa.go:55] duration metric: took 196.641742ms for default service account to be created ...
	I0127 14:18:47.936536 1860441 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:18:48.139726 1860441 system_pods.go:87] 9 kube-system pods found
	I0127 14:18:47.405959 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:49.408149 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:47.931337 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:47.931793 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:47.931838 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:47.931776 1863364 retry.go:31] will retry after 1.120510293s: waiting for domain to come up
	I0127 14:18:49.053548 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:49.054204 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:49.054231 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:49.054156 1863364 retry.go:31] will retry after 1.733549309s: waiting for domain to come up
	I0127 14:18:50.790083 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:50.790567 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:50.790650 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:50.790566 1863364 retry.go:31] will retry after 1.990202359s: waiting for domain to come up
	I0127 14:18:51.906048 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:53.906496 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:52.782229 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:52.782850 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:52.782892 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:52.782738 1863364 retry.go:31] will retry after 2.327681841s: waiting for domain to come up
	I0127 14:18:55.113291 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:55.113832 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:55.113864 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:55.113778 1863364 retry.go:31] will retry after 3.526138042s: waiting for domain to come up
	I0127 14:18:55.906587 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:58.405047 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:18:58.641406 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:18:58.642022 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
	I0127 14:18:58.642056 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:58.641994 1863364 retry.go:31] will retry after 5.217691775s: waiting for domain to come up
	I0127 14:19:00.906487 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:03.405134 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:05.405708 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:03.862320 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.862779 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has current primary IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.862804 1863329 main.go:141] libmachine: (newest-cni-309688) found domain IP: 192.168.72.17
	I0127 14:19:03.862815 1863329 main.go:141] libmachine: (newest-cni-309688) reserving static IP address...
	I0127 14:19:03.863295 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "newest-cni-309688", mac: "52:54:00:1b:25:ab", ip: "192.168.72.17"} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.863323 1863329 main.go:141] libmachine: (newest-cni-309688) reserved static IP address 192.168.72.17 for domain newest-cni-309688
	I0127 14:19:03.863342 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | skip adding static IP to network mk-newest-cni-309688 - found existing host DHCP lease matching {name: "newest-cni-309688", mac: "52:54:00:1b:25:ab", ip: "192.168.72.17"}
	I0127 14:19:03.863372 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Getting to WaitForSSH function...
	I0127 14:19:03.863389 1863329 main.go:141] libmachine: (newest-cni-309688) waiting for SSH...
	I0127 14:19:03.865894 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.866214 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.866242 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.866399 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Using SSH client type: external
	I0127 14:19:03.866428 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa (-rw-------)
	I0127 14:19:03.866460 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:19:03.866485 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | About to run SSH command:
	I0127 14:19:03.866510 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | exit 0
	I0127 14:19:03.986391 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | SSH cmd err, output: <nil>: 
	I0127 14:19:03.986778 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetConfigRaw
	I0127 14:19:03.987411 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:03.990205 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.990686 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.990714 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.990989 1863329 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/config.json ...
	I0127 14:19:03.991197 1863329 machine.go:93] provisionDockerMachine start ...
	I0127 14:19:03.991218 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:03.991433 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:03.993663 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.993956 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:03.994002 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:03.994179 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:03.994359 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:03.994518 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:03.994653 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:03.994863 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:03.995069 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:03.995080 1863329 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:19:04.094835 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 14:19:04.094866 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
	I0127 14:19:04.095102 1863329 buildroot.go:166] provisioning hostname "newest-cni-309688"
	I0127 14:19:04.095129 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
	I0127 14:19:04.095318 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.097835 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.098248 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.098281 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.098404 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.098576 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.098735 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.098905 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.099088 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:04.099267 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:04.099282 1863329 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-309688 && echo "newest-cni-309688" | sudo tee /etc/hostname
	I0127 14:19:04.213036 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-309688
	
	I0127 14:19:04.213082 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.215824 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.216184 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.216208 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.216357 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.216549 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.216671 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.216793 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.216979 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:04.217204 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:04.217230 1863329 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-309688' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-309688/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-309688' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:19:04.329169 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:19:04.329206 1863329 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-1798877/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-1798877/.minikube}
	I0127 14:19:04.329248 1863329 buildroot.go:174] setting up certificates
	I0127 14:19:04.329259 1863329 provision.go:84] configureAuth start
	I0127 14:19:04.329269 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
	I0127 14:19:04.329540 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:04.332411 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.332850 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.332878 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.333078 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.335728 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.336143 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.336174 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.336351 1863329 provision.go:143] copyHostCerts
	I0127 14:19:04.336415 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem, removing ...
	I0127 14:19:04.336439 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem
	I0127 14:19:04.336527 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem (1078 bytes)
	I0127 14:19:04.336664 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem, removing ...
	I0127 14:19:04.336677 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem
	I0127 14:19:04.336718 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem (1123 bytes)
	I0127 14:19:04.336806 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem, removing ...
	I0127 14:19:04.336817 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem
	I0127 14:19:04.336852 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem (1675 bytes)
	I0127 14:19:04.336995 1863329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem org=jenkins.newest-cni-309688 san=[127.0.0.1 192.168.72.17 localhost minikube newest-cni-309688]
	I0127 14:19:04.445795 1863329 provision.go:177] copyRemoteCerts
	I0127 14:19:04.445894 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:19:04.445928 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.448735 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.449074 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.449106 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.449317 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.449501 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.449677 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.449816 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.528783 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:19:04.552897 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 14:19:04.575992 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:19:04.598152 1863329 provision.go:87] duration metric: took 268.879651ms to configureAuth
	I0127 14:19:04.598183 1863329 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:19:04.598397 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:19:04.598411 1863329 machine.go:96] duration metric: took 607.201271ms to provisionDockerMachine
	I0127 14:19:04.598421 1863329 start.go:293] postStartSetup for "newest-cni-309688" (driver="kvm2")
	I0127 14:19:04.598437 1863329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:19:04.598481 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.598842 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:19:04.598874 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.601257 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.601599 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.601628 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.601759 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.601945 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.602093 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.602260 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.685084 1863329 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:19:04.689047 1863329 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:19:04.689081 1863329 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/addons for local assets ...
	I0127 14:19:04.689137 1863329 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/files for local assets ...
	I0127 14:19:04.689212 1863329 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem -> 18060702.pem in /etc/ssl/certs
	I0127 14:19:04.689300 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:19:04.698109 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:19:04.723269 1863329 start.go:296] duration metric: took 124.828224ms for postStartSetup
	I0127 14:19:04.723315 1863329 fix.go:56] duration metric: took 22.752659687s for fixHost
	I0127 14:19:04.723339 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.726123 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.726570 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.726601 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.726820 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.727042 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.727229 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.727405 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.727627 1863329 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:04.727869 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.17 22 <nil> <nil>}
	I0127 14:19:04.727885 1863329 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:19:04.831094 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987544.794055340
	
	I0127 14:19:04.831118 1863329 fix.go:216] guest clock: 1737987544.794055340
	I0127 14:19:04.831124 1863329 fix.go:229] Guest: 2025-01-27 14:19:04.79405534 +0000 UTC Remote: 2025-01-27 14:19:04.723319581 +0000 UTC m=+22.912787075 (delta=70.735759ms)
	I0127 14:19:04.831145 1863329 fix.go:200] guest clock delta is within tolerance: 70.735759ms
	I0127 14:19:04.831149 1863329 start.go:83] releasing machines lock for "newest-cni-309688", held for 22.860512585s
	I0127 14:19:04.831167 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.831433 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:04.834349 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.834694 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.834718 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.834915 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.835447 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.835626 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:04.835729 1863329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:19:04.835772 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.835799 1863329 ssh_runner.go:195] Run: cat /version.json
	I0127 14:19:04.835821 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:04.838501 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.838695 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.838855 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.838881 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.839077 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.839082 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:04.839117 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:04.839262 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:04.839272 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.839481 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:04.839482 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.839635 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:04.839648 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.839742 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:04.942379 1863329 ssh_runner.go:195] Run: systemctl --version
	I0127 14:19:04.948168 1863329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:19:04.953645 1863329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:19:04.953703 1863329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:19:04.969617 1863329 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:19:04.969646 1863329 start.go:495] detecting cgroup driver to use...
	I0127 14:19:04.969742 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 14:19:05.001151 1863329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 14:19:05.014859 1863329 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:19:05.014928 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:19:05.030145 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:19:05.044008 1863329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:19:05.174941 1863329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:19:05.330526 1863329 docker.go:233] disabling docker service ...
	I0127 14:19:05.330619 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:19:05.345183 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:19:05.357628 1863329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:19:05.474635 1863329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:19:05.587063 1863329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:19:05.600224 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:19:05.616896 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 14:19:05.628539 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 14:19:05.639531 1863329 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 14:19:05.639605 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 14:19:05.649978 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:19:05.659986 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 14:19:05.669665 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 14:19:05.680018 1863329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:19:05.690041 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 14:19:05.699586 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 14:19:05.709482 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 14:19:05.719643 1863329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:19:05.728454 1863329 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:19:05.728520 1863329 ssh_runner.go:195] Run: sudo modprobe br_netfilter
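The two entries above show a netfilter probe-and-fallback: the bridge-nf-call-iptables sysctl is absent until the br_netfilter module is loaded, so the probe exits with status 255 and the module is loaded next. A minimal Go sketch of that probe-then-load pattern (illustrative only; the command names come from the log, the helper function and the re-check afterwards are assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the probe-then-load sequence from the log:
    // check the sysctl first, and only modprobe br_netfilter when the key is absent.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
    		return nil // sysctl key already present, module already loaded
    	}
    	// The sysctl is missing (status 255 in the log), so load the module and re-check.
    	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %w", err)
    	}
    	return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("bridge netfilter not available:", err)
    	}
    }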
	I0127 14:19:05.743292 1863329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:19:05.752875 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:05.862682 1863329 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 14:19:05.897001 1863329 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 14:19:05.897074 1863329 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:19:05.901946 1863329 retry.go:31] will retry after 1.257073282s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
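The containerd restart is followed by a bounded wait for its socket: the first stat fails because /run/containerd/containerd.sock does not exist yet, so the check is retried within the 60s budget announced above. A minimal sketch of such a poll-with-retry in Go, assuming a simple linear backoff rather than whatever schedule retry.go actually uses:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a socket path until it appears or the budget runs out,
    // mirroring the "Will wait 60s for socket path" retry seen in the log.
    // The 60s budget and the path come from the log; the backoff is an assumption.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := time.Second
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    		if delay < 5*time.Second {
    			delay += time.Second // back off a little on each miss
    		}
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }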
	I0127 14:19:07.159917 1863329 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 14:19:07.165117 1863329 start.go:563] Will wait 60s for crictl version
	I0127 14:19:07.165209 1863329 ssh_runner.go:195] Run: which crictl
	I0127 14:19:07.168995 1863329 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:19:07.209167 1863329 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 14:19:07.209244 1863329 ssh_runner.go:195] Run: containerd --version
	I0127 14:19:07.236320 1863329 ssh_runner.go:195] Run: containerd --version
	I0127 14:19:07.261054 1863329 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 14:19:07.262245 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
	I0127 14:19:07.265288 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:07.265739 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:07.265772 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:07.265980 1863329 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 14:19:07.270111 1863329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
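The pair of commands above makes the host.minikube.internal mapping idempotent: any existing entry is filtered out with grep -v, the fresh entry is appended, and the temporary file is copied over /etc/hosts. A short Go sketch of the same rewrite-and-replace pattern (hypothetical helper, not minikube's code; it targets a local test file rather than /etc/hosts):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one line
    // maps name to ip, mirroring the grep -v / echo / cp sequence in the log.
    // Writing to a temp file and renaming keeps the update atomic.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line) // drop any stale entry for name
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := ensureHostsEntry("hosts.test", "192.168.72.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }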
	I0127 14:19:07.283905 1863329 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 14:19:07.406716 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:09.905446 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:07.285143 1863329 kubeadm.go:883] updating cluster {Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:19:07.285271 1863329 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 14:19:07.285342 1863329 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:19:07.314913 1863329 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:19:07.314944 1863329 containerd.go:534] Images already preloaded, skipping extraction
	I0127 14:19:07.315010 1863329 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:19:07.345742 1863329 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 14:19:07.345770 1863329 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:19:07.345779 1863329 kubeadm.go:934] updating node { 192.168.72.17 8443 v1.32.1 containerd true true} ...
	I0127 14:19:07.345897 1863329 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-309688 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:19:07.345956 1863329 ssh_runner.go:195] Run: sudo crictl info
	I0127 14:19:07.379712 1863329 cni.go:84] Creating CNI manager for ""
	I0127 14:19:07.379740 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:19:07.379759 1863329 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 14:19:07.379800 1863329 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.17 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-309688 NodeName:newest-cni-309688 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:19:07.379979 1863329 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-309688"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.17"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.17"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:19:07.380049 1863329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:19:07.390315 1863329 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:19:07.390456 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:19:07.399585 1863329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 14:19:07.417531 1863329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:19:07.433514 1863329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
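The 2308-byte kubeadm.yaml.new written here is rendered from the kubeadm options logged above (pod CIDR 10.42.0.0/16, service CIDR 10.96.0.0/12, cgroupfs driver, advertise address 192.168.72.17). A minimal sketch of producing such a manifest with Go's text/template; the template text and the Opts struct below are illustrative assumptions, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Opts holds the handful of per-cluster values taken from the log.
    type Opts struct {
    	AdvertiseAddress string
    	PodSubnet        string
    	ServiceSubnet    string
    	CgroupDriver     string
    }

    const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    controlPlaneEndpoint: control-plane.minikube.internal:8443
    apiServer:
      certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: {{.CgroupDriver}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
    	_ = t.Execute(os.Stdout, Opts{
    		AdvertiseAddress: "192.168.72.17",
    		PodSubnet:        "10.42.0.0/16",
    		ServiceSubnet:    "10.96.0.0/12",
    		CgroupDriver:     "cgroupfs",
    	})
    }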
	I0127 14:19:07.449318 1863329 ssh_runner.go:195] Run: grep 192.168.72.17	control-plane.minikube.internal$ /etc/hosts
	I0127 14:19:07.452848 1863329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:19:07.464375 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:07.590492 1863329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:19:07.609018 1863329 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688 for IP: 192.168.72.17
	I0127 14:19:07.609048 1863329 certs.go:194] generating shared ca certs ...
	I0127 14:19:07.609072 1863329 certs.go:226] acquiring lock for ca certs: {Name:mkc6b95fb3d2c0d0c7049cde446028a0d731f231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:07.609277 1863329 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key
	I0127 14:19:07.609328 1863329 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key
	I0127 14:19:07.609339 1863329 certs.go:256] generating profile certs ...
	I0127 14:19:07.609434 1863329 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/client.key
	I0127 14:19:07.609500 1863329 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.key.54b7a6ae
	I0127 14:19:07.609534 1863329 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.key
	I0127 14:19:07.609661 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem (1338 bytes)
	W0127 14:19:07.609700 1863329 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070_empty.pem, impossibly tiny 0 bytes
	I0127 14:19:07.609707 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:19:07.609732 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:19:07.609776 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:19:07.609807 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem (1675 bytes)
	I0127 14:19:07.609872 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem (1708 bytes)
	I0127 14:19:07.613389 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:19:07.649675 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:19:07.678577 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:19:07.707466 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 14:19:07.736820 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 14:19:07.764078 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:19:07.791040 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:19:07.817979 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:19:07.846978 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:19:07.869002 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem --> /usr/share/ca-certificates/1806070.pem (1338 bytes)
	I0127 14:19:07.892530 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /usr/share/ca-certificates/18060702.pem (1708 bytes)
	I0127 14:19:07.917138 1863329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:19:07.933638 1863329 ssh_runner.go:195] Run: openssl version
	I0127 14:19:07.939662 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:19:07.951267 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:07.955439 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:07.955494 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:07.961014 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:19:07.972145 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806070.pem && ln -fs /usr/share/ca-certificates/1806070.pem /etc/ssl/certs/1806070.pem"
	I0127 14:19:07.983517 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806070.pem
	I0127 14:19:07.987671 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:10 /usr/share/ca-certificates/1806070.pem
	I0127 14:19:07.987719 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806070.pem
	I0127 14:19:07.993079 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806070.pem /etc/ssl/certs/51391683.0"
	I0127 14:19:08.004139 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18060702.pem && ln -fs /usr/share/ca-certificates/18060702.pem /etc/ssl/certs/18060702.pem"
	I0127 14:19:08.015248 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18060702.pem
	I0127 14:19:08.019068 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:10 /usr/share/ca-certificates/18060702.pem
	I0127 14:19:08.019113 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18060702.pem
	I0127 14:19:08.024062 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18060702.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:19:08.033948 1863329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:19:08.038251 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:19:08.043547 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:19:08.048804 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:19:08.054182 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:19:08.059290 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:19:08.064227 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
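Each openssl run above uses -checkend 86400, i.e. it asks whether the certificate will expire within the next 24 hours; a non-zero exit would force the cert to be regenerated. The same check expressed in Go with crypto/x509 (hypothetical helper; the certificate path is taken from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within d, the same question "openssl x509 -checkend 86400" answers for 24h.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Path taken from the log; adjust for a local run.
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }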
	I0127 14:19:08.069315 1863329 kubeadm.go:392] StartCluster: {Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:19:08.069441 1863329 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 14:19:08.069490 1863329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:19:08.106407 1863329 cri.go:89] found id: "44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666"
	I0127 14:19:08.106434 1863329 cri.go:89] found id: "d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186"
	I0127 14:19:08.106441 1863329 cri.go:89] found id: "7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf"
	I0127 14:19:08.106446 1863329 cri.go:89] found id: "72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199"
	I0127 14:19:08.106450 1863329 cri.go:89] found id: "0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698"
	I0127 14:19:08.106455 1863329 cri.go:89] found id: "0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63"
	I0127 14:19:08.106459 1863329 cri.go:89] found id: "2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb"
	I0127 14:19:08.106463 1863329 cri.go:89] found id: "89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe"
	I0127 14:19:08.106467 1863329 cri.go:89] found id: ""
	I0127 14:19:08.106525 1863329 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 14:19:08.121718 1863329 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T14:19:08Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 14:19:08.121817 1863329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:19:08.131128 1863329 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 14:19:08.131152 1863329 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 14:19:08.131206 1863329 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 14:19:08.141323 1863329 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:19:08.142436 1863329 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-309688" does not appear in /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:19:08.143126 1863329 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-1798877/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-309688" cluster setting kubeconfig missing "newest-cni-309688" context setting]
	I0127 14:19:08.144090 1863329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:08.145938 1863329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 14:19:08.155827 1863329 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.17
	I0127 14:19:08.155862 1863329 kubeadm.go:1160] stopping kube-system containers ...
	I0127 14:19:08.155887 1863329 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 14:19:08.155943 1863329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:19:08.191753 1863329 cri.go:89] found id: "44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666"
	I0127 14:19:08.191787 1863329 cri.go:89] found id: "d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186"
	I0127 14:19:08.191794 1863329 cri.go:89] found id: "7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf"
	I0127 14:19:08.191799 1863329 cri.go:89] found id: "72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199"
	I0127 14:19:08.191804 1863329 cri.go:89] found id: "0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698"
	I0127 14:19:08.191808 1863329 cri.go:89] found id: "0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63"
	I0127 14:19:08.191812 1863329 cri.go:89] found id: "2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb"
	I0127 14:19:08.191817 1863329 cri.go:89] found id: "89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe"
	I0127 14:19:08.191822 1863329 cri.go:89] found id: ""
	I0127 14:19:08.191829 1863329 cri.go:252] Stopping containers: [44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666 d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186 7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf 72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199 0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698 0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63 2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb 89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe]
	I0127 14:19:08.191909 1863329 ssh_runner.go:195] Run: which crictl
	I0127 14:19:08.195781 1863329 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666 d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186 7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf 72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199 0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698 0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63 2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb 89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe
	I0127 14:19:08.232200 1863329 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 14:19:08.248830 1863329 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:19:08.258186 1863329 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:19:08.258248 1863329 kubeadm.go:157] found existing configuration files:
	
	I0127 14:19:08.258301 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:19:08.266710 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:19:08.266787 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:19:08.276679 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:19:08.285327 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:19:08.285384 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:19:08.293919 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:19:08.302352 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:19:08.302466 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:19:08.314481 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:19:08.324318 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:19:08.324378 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:19:08.333925 1863329 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:19:08.343981 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:08.484856 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.407056 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.612649 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.691321 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:09.780355 1863329 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:19:09.780450 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:10.281441 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:10.780982 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:10.803824 1863329 api_server.go:72] duration metric: took 1.023465596s to wait for apiserver process to appear ...
	I0127 14:19:10.803860 1863329 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:19:10.803886 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:10.804578 1863329 api_server.go:269] stopped: https://192.168.72.17:8443/healthz: Get "https://192.168.72.17:8443/healthz": dial tcp 192.168.72.17:8443: connect: connection refused
	I0127 14:19:11.304934 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:11.906081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:13.906183 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:13.554007 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:19:13.554040 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:19:13.554061 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:13.596380 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:19:13.596419 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:19:13.804894 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:13.819580 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:13.819610 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:14.304214 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:14.309598 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:14.309627 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:14.804236 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:14.809512 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:14.809551 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:15.304181 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:15.309590 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:15.309618 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:15.803958 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:15.813848 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:15.813901 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:16.304624 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:16.310313 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:19:16.310345 1863329 api_server.go:103] status: https://192.168.72.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:19:16.804590 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:16.809168 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 200:
	ok
	I0127 14:19:16.816088 1863329 api_server.go:141] control plane version: v1.32.1
	I0127 14:19:16.816123 1863329 api_server.go:131] duration metric: took 6.012253595s to wait for apiserver health ...
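For reference, the healthz polling logged above (api_server.go waiting for a 200 from https://192.168.72.17:8443/healthz while several 500 responses list a failing poststarthook) can be reproduced with a small standalone probe. The sketch below is illustrative only: the endpoint is the one from this run, certificate verification is skipped for brevity, and the retry interval and timeout are assumptions rather than minikube's actual polling parameters.

	// healthzprobe: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
	// Illustrative sketch only; TLS handling and timings are assumptions, not minikube's code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skip verification because the apiserver serves a cluster-internal CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.72.17:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				// A 500 body enumerates each [+]/[-] poststarthook, as seen in the log above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}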
	I0127 14:19:16.816135 1863329 cni.go:84] Creating CNI manager for ""
	I0127 14:19:16.816144 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:19:16.817843 1863329 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:19:16.819038 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:19:16.829479 1863329 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:19:16.847164 1863329 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:19:16.857140 1863329 system_pods.go:59] 9 kube-system pods found
	I0127 14:19:16.857176 1863329 system_pods.go:61] "coredns-668d6bf9bc-f66f4" [b30ba9c6-eb6e-44c1-b389-96263bb405a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:16.857187 1863329 system_pods.go:61] "coredns-668d6bf9bc-pt6d2" [d8cf3c75-3646-40d8-8131-efd331e2cec7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:16.857198 1863329 system_pods.go:61] "etcd-newest-cni-309688" [f963f636-8186-4dd8-8263-b7bc29d15bc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:19:16.857210 1863329 system_pods.go:61] "kube-apiserver-newest-cni-309688" [90584ec0-2731-48e1-a2e6-5bef7b170386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:19:16.857219 1863329 system_pods.go:61] "kube-controller-manager-newest-cni-309688" [ef3b3e37-08d8-48aa-a55e-55d4c87c8189] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:19:16.857227 1863329 system_pods.go:61] "kube-proxy-8mwp9" [ebb658f3-eba2-4743-94cd-da996046bd02] Running
	I0127 14:19:16.857236 1863329 system_pods.go:61] "kube-scheduler-newest-cni-309688" [07bcbbe9-474a-4bc2-9f58-f889fa685754] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:19:16.857263 1863329 system_pods.go:61] "metrics-server-f79f97bbb-jw4m9" [85929ac8-142c-4bc7-90da-5c13f9ff3c0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:19:16.857277 1863329 system_pods.go:61] "storage-provisioner" [94820502-acf5-4297-8fc9-d4b4953b01ab] Running
	I0127 14:19:16.857287 1863329 system_pods.go:74] duration metric: took 10.102454ms to wait for pod list to return data ...
	I0127 14:19:16.857300 1863329 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:19:16.860835 1863329 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:19:16.860862 1863329 node_conditions.go:123] node cpu capacity is 2
	I0127 14:19:16.860886 1863329 node_conditions.go:105] duration metric: took 3.575582ms to run NodePressure ...
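The "waiting for kube-system pods to appear" step above lists each pod together with which containers are not yet Ready. A minimal client-go sketch of that same listing is shown below, assuming a kubeconfig at the default ~/.kube/config location rather than the Jenkins-specific path used in this run; it is not minikube's system_pods.go implementation.

	// podcheck: list kube-system pods and report which containers are not yet Ready.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			notReady := []string{}
			for _, cs := range p.Status.ContainerStatuses {
				if !cs.Ready {
					notReady = append(notReady, cs.Name)
				}
			}
			fmt.Printf("%-55s phase=%s not-ready=%v\n", p.Name, p.Status.Phase, notReady)
		}
	}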
	I0127 14:19:16.860913 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:19:17.133479 1863329 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:19:17.144656 1863329 ops.go:34] apiserver oom_adj: -16
	I0127 14:19:17.144684 1863329 kubeadm.go:597] duration metric: took 9.013524206s to restartPrimaryControlPlane
	I0127 14:19:17.144695 1863329 kubeadm.go:394] duration metric: took 9.075390076s to StartCluster
	I0127 14:19:17.144715 1863329 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:17.144810 1863329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:19:17.146498 1863329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:17.146819 1863329 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 14:19:17.146906 1863329 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:19:17.147019 1863329 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-309688"
	I0127 14:19:17.147042 1863329 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-309688"
	I0127 14:19:17.147041 1863329 addons.go:69] Setting default-storageclass=true in profile "newest-cni-309688"
	W0127 14:19:17.147054 1863329 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:19:17.147075 1863329 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-309688"
	I0127 14:19:17.147081 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:19:17.147079 1863329 addons.go:69] Setting dashboard=true in profile "newest-cni-309688"
	I0127 14:19:17.147063 1863329 addons.go:69] Setting metrics-server=true in profile "newest-cni-309688"
	I0127 14:19:17.147150 1863329 addons.go:238] Setting addon metrics-server=true in "newest-cni-309688"
	W0127 14:19:17.147164 1863329 addons.go:247] addon metrics-server should already be in state true
	I0127 14:19:17.147190 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.147088 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.147127 1863329 addons.go:238] Setting addon dashboard=true in "newest-cni-309688"
	W0127 14:19:17.147431 1863329 addons.go:247] addon dashboard should already be in state true
	I0127 14:19:17.147463 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.147523 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147558 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147565 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.147607 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.147687 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147718 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.147797 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.147810 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.148440 1863329 out.go:177] * Verifying Kubernetes components...
	I0127 14:19:17.149687 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:17.163903 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0127 14:19:17.164136 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0127 14:19:17.164313 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.164874 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.165122 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.165143 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.165396 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.165415 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.165676 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.165822 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.165886 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.166471 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.166526 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.175217 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I0127 14:19:17.175873 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.176532 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.176558 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.176979 1863329 addons.go:238] Setting addon default-storageclass=true in "newest-cni-309688"
	I0127 14:19:17.176997 1863329 main.go:141] libmachine: () Calling .GetMachineName
	W0127 14:19:17.177002 1863329 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:19:17.177080 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
	I0127 14:19:17.177500 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.177518 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.177541 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.177556 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.192916 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
	I0127 14:19:17.193458 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.194088 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.194110 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.194524 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.195179 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.195214 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.196238 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0127 14:19:17.196598 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.196918 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0127 14:19:17.197180 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.197200 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.197360 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.197480 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.197523 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35899
	I0127 14:19:17.197802 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.197813 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.198103 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.198164 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.198321 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.198535 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:19:17.198583 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:17.198888 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.198902 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.199305 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.199518 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.200369 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.201165 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.202593 1863329 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:19:17.202676 1863329 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:19:17.203794 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:19:17.203807 1863329 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:19:17.203824 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.203911 1863329 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:19:17.203926 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:19:17.203944 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.207477 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.207978 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.208029 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.208889 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.209077 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.209227 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.209363 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.216222 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.216592 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I0127 14:19:17.216814 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.216831 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.216961 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.217064 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.217256 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.217411 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.217422 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.217463 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.217578 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.217795 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.217839 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0127 14:19:17.218152 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.218203 1863329 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:17.218804 1863329 main.go:141] libmachine: Using API Version  1
	I0127 14:19:17.218816 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:17.219270 1863329 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:17.219480 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
	I0127 14:19:17.219969 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.220954 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
	I0127 14:19:17.221278 1863329 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:19:17.221291 1863329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:19:17.221312 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.221888 1863329 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:19:17.223572 1863329 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:19:17.225013 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:19:17.225038 1863329 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:19:17.225052 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
	I0127 14:19:17.225188 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.225554 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.225777 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.225825 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.226023 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.226118 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.226242 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.228625 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.228937 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
	I0127 14:19:17.228977 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
	I0127 14:19:17.229171 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
	I0127 14:19:17.229344 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
	I0127 14:19:17.229536 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
	I0127 14:19:17.229794 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
	I0127 14:19:17.331878 1863329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:19:17.351919 1863329 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:19:17.352011 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:19:17.365611 1863329 api_server.go:72] duration metric: took 218.744274ms to wait for apiserver process to appear ...
	I0127 14:19:17.365637 1863329 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:19:17.365655 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
	I0127 14:19:17.372023 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 200:
	ok
	I0127 14:19:17.373577 1863329 api_server.go:141] control plane version: v1.32.1
	I0127 14:19:17.373603 1863329 api_server.go:131] duration metric: took 7.959402ms to wait for apiserver health ...
	I0127 14:19:17.373612 1863329 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:19:17.382361 1863329 system_pods.go:59] 9 kube-system pods found
	I0127 14:19:17.382397 1863329 system_pods.go:61] "coredns-668d6bf9bc-f66f4" [b30ba9c6-eb6e-44c1-b389-96263bb405a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:17.382408 1863329 system_pods.go:61] "coredns-668d6bf9bc-pt6d2" [d8cf3c75-3646-40d8-8131-efd331e2cec7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:19:17.382420 1863329 system_pods.go:61] "etcd-newest-cni-309688" [f963f636-8186-4dd8-8263-b7bc29d15bc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:19:17.382430 1863329 system_pods.go:61] "kube-apiserver-newest-cni-309688" [90584ec0-2731-48e1-a2e6-5bef7b170386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:19:17.382453 1863329 system_pods.go:61] "kube-controller-manager-newest-cni-309688" [ef3b3e37-08d8-48aa-a55e-55d4c87c8189] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:19:17.382460 1863329 system_pods.go:61] "kube-proxy-8mwp9" [ebb658f3-eba2-4743-94cd-da996046bd02] Running
	I0127 14:19:17.382473 1863329 system_pods.go:61] "kube-scheduler-newest-cni-309688" [07bcbbe9-474a-4bc2-9f58-f889fa685754] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:19:17.382480 1863329 system_pods.go:61] "metrics-server-f79f97bbb-jw4m9" [85929ac8-142c-4bc7-90da-5c13f9ff3c0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:19:17.382486 1863329 system_pods.go:61] "storage-provisioner" [94820502-acf5-4297-8fc9-d4b4953b01ab] Running
	I0127 14:19:17.382496 1863329 system_pods.go:74] duration metric: took 8.875555ms to wait for pod list to return data ...
	I0127 14:19:17.382507 1863329 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:19:17.385289 1863329 default_sa.go:45] found service account: "default"
	I0127 14:19:17.385310 1863329 default_sa.go:55] duration metric: took 2.794486ms for default service account to be created ...
	I0127 14:19:17.385319 1863329 kubeadm.go:582] duration metric: took 238.459291ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 14:19:17.385341 1863329 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:19:17.388555 1863329 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:19:17.388583 1863329 node_conditions.go:123] node cpu capacity is 2
	I0127 14:19:17.388596 1863329 node_conditions.go:105] duration metric: took 3.249906ms to run NodePressure ...
	I0127 14:19:17.388610 1863329 start.go:241] waiting for startup goroutines ...
	I0127 14:19:17.418149 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:19:17.421312 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:19:17.421340 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:19:17.438395 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:19:17.454881 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:19:17.454907 1863329 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:19:17.463957 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:19:17.463983 1863329 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:19:17.511881 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:19:17.511918 1863329 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:19:17.526875 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:19:17.526902 1863329 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:19:17.564740 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:19:17.593901 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:19:17.593956 1863329 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:19:17.686229 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:19:17.686255 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:19:17.771605 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:19:17.771642 1863329 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:19:17.858960 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:19:17.858995 1863329 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:19:17.968615 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:19:17.968653 1863329 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:19:18.103281 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:19:18.103311 1863329 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:19:18.180707 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:19:18.180741 1863329 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:19:18.229422 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:19:19.526682 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.088226902s)
	I0127 14:19:19.526763 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.526777 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.526802 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962012351s)
	I0127 14:19:19.526851 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.526861 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.108674811s)
	I0127 14:19:19.526875 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.526891 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.526910 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.527161 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.527203 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.527212 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.527219 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.527227 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.528059 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528072 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528080 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.528088 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.528229 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528239 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528293 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.528342 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528349 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528356 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.528362 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.528502 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.528531 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.528538 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.528548 1863329 addons.go:479] Verifying addon metrics-server=true in "newest-cni-309688"
	I0127 14:19:19.528986 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.529006 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.529009 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.552242 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.552274 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.552631 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.552650 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.709148 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.47964575s)
	I0127 14:19:19.709210 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.709226 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.709584 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.709606 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.709613 1863329 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:19.709610 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
	I0127 14:19:19.709620 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
	I0127 14:19:19.709911 1863329 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:19.709925 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:19.711462 1863329 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-309688 addons enable metrics-server
	
	I0127 14:19:19.712846 1863329 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0127 14:19:19.714093 1863329 addons.go:514] duration metric: took 2.567193619s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0127 14:19:19.714146 1863329 start.go:246] waiting for cluster config update ...
	I0127 14:19:19.714163 1863329 start.go:255] writing updated cluster config ...
	I0127 14:19:19.714515 1863329 ssh_runner.go:195] Run: rm -f paused
	I0127 14:19:19.771292 1863329 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:19:19.773125 1863329 out.go:177] * Done! kubectl is now configured to use "newest-cni-309688" cluster and "default" namespace by default
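The "kubectl is now configured" message above reflects the kubeconfig update made a few lines earlier. A small sketch of how one might confirm the active context and API server afterwards is shown below; it reads the default ~/.kube/config path, which is an assumption, since this run uses a Jenkins-specific KUBECONFIG.

	// contextcheck: print the kubeconfig's current context and its API server address.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		fmt.Println("current-context:", cfg.CurrentContext)
		if ctx, ok := cfg.Contexts[cfg.CurrentContext]; ok {
			if cluster, ok := cfg.Clusters[ctx.Cluster]; ok {
				fmt.Println("server:", cluster.Server)
			}
		}
	}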
	I0127 14:19:16.407410 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:18.408328 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:20.905706 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:22.906390 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:25.405847 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:27.406081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:29.406653 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:31.905101 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:33.906032 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:36.406416 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:38.905541 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:41.405451 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:43.405883 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:45.905497 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:47.905917 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:50.405296 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:52.405989 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:54.905953 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:56.906021 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:19:58.906598 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:01.405909 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:03.406128 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:05.906092 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:08.405216 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:10.405449 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:12.905583 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:14.399935 1860751 pod_ready.go:82] duration metric: took 4m0.000530283s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" ...
	E0127 14:20:14.399966 1860751 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 14:20:14.399992 1860751 pod_ready.go:39] duration metric: took 4m31.410913398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
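The repeated pod_ready.go lines above are a wait loop checking the pod's PodReady condition, which stays False for metrics-server until the 4m0s budget is exhausted and the control-plane restart is abandoned. A minimal sketch of that condition check is shown below; it is an illustrative helper, not minikube's pod_ready.go code, and the sample pod in main is constructed purely for demonstration.

	// isPodReady reports whether a pod's PodReady condition is True, the condition
	// the wait loop above keeps logging as "Ready":"False" for metrics-server.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{
			Status: corev1.PodStatus{
				Conditions: []corev1.PodCondition{
					{Type: corev1.PodReady, Status: corev1.ConditionFalse},
				},
			},
		}
		fmt.Println("ready:", isPodReady(pod)) // prints "ready: false"
	}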
	I0127 14:20:14.400032 1860751 kubeadm.go:597] duration metric: took 5m29.594675564s to restartPrimaryControlPlane
	W0127 14:20:14.400141 1860751 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 14:20:14.400175 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 14:20:15.909704 1860751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.509493932s)
	I0127 14:20:15.909782 1860751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:20:15.925857 1860751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:20:15.935803 1860751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:20:15.946508 1860751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:20:15.946527 1860751 kubeadm.go:157] found existing configuration files:
	
	I0127 14:20:15.946566 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 14:20:15.956633 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:20:15.956690 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:20:15.966965 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 14:20:15.984740 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:20:15.984801 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:20:15.995541 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 14:20:16.005543 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:20:16.005605 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:20:16.015855 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 14:20:16.025594 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:20:16.025640 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
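The grep/rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected endpoint (https://control-plane.minikube.internal:8444 for this profile), and is otherwise removed so the subsequent kubeadm init can regenerate it. The sketch below mirrors that logic locally; minikube itself runs the equivalent grep and rm -f commands over SSH, so the Go version is an illustration, not its implementation.

	// cleanStaleConfigs: remove kubeconfigs that are missing or point at a different endpoint.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8444"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing elsewhere: delete it (errors ignored if it never existed).
				os.Remove(f)
				fmt.Println("removed stale config:", f)
				continue
			}
			fmt.Println("kept:", f)
		}
	}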
	I0127 14:20:16.035989 1860751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:20:16.197395 1860751 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:20:24.074171 1860751 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:20:24.074259 1860751 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:20:24.074369 1860751 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:20:24.074528 1860751 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:20:24.074657 1860751 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:20:24.074731 1860751 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:20:24.076292 1860751 out.go:235]   - Generating certificates and keys ...
	I0127 14:20:24.076373 1860751 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:20:24.076450 1860751 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:20:24.076532 1860751 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:20:24.076585 1860751 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:20:24.076644 1860751 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:20:24.076713 1860751 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:20:24.076800 1860751 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:20:24.076884 1860751 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:20:24.076992 1860751 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:20:24.077103 1860751 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:20:24.077169 1860751 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:20:24.077243 1860751 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:20:24.077289 1860751 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:20:24.077349 1860751 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:20:24.077397 1860751 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:20:24.077468 1860751 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:20:24.077537 1860751 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:20:24.077610 1860751 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:20:24.077669 1860751 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:20:24.078852 1860751 out.go:235]   - Booting up control plane ...
	I0127 14:20:24.078965 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:20:24.079055 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:20:24.079140 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:20:24.079285 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:20:24.079429 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:20:24.079489 1860751 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:20:24.079690 1860751 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:20:24.079833 1860751 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:20:24.079921 1860751 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.61135ms
	I0127 14:20:24.080007 1860751 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:20:24.080110 1860751 kubeadm.go:310] [api-check] The API server is healthy after 5.001239504s
	I0127 14:20:24.080256 1860751 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:20:24.080387 1860751 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:20:24.080441 1860751 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:20:24.080637 1860751 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-212529 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:20:24.080711 1860751 kubeadm.go:310] [bootstrap-token] Using token: pxjq5d.hk6ws8nooc0hkr03
	I0127 14:20:24.082018 1860751 out.go:235]   - Configuring RBAC rules ...
	I0127 14:20:24.082176 1860751 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:20:24.082314 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:20:24.082518 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:20:24.082703 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:20:24.082889 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:20:24.083015 1860751 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:20:24.083173 1860751 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:20:24.083250 1860751 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:20:24.083301 1860751 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:20:24.083311 1860751 kubeadm.go:310] 
	I0127 14:20:24.083396 1860751 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:20:24.083407 1860751 kubeadm.go:310] 
	I0127 14:20:24.083513 1860751 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:20:24.083522 1860751 kubeadm.go:310] 
	I0127 14:20:24.083558 1860751 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:20:24.083655 1860751 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:20:24.083734 1860751 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:20:24.083743 1860751 kubeadm.go:310] 
	I0127 14:20:24.083802 1860751 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:20:24.083810 1860751 kubeadm.go:310] 
	I0127 14:20:24.083852 1860751 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:20:24.083858 1860751 kubeadm.go:310] 
	I0127 14:20:24.083921 1860751 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:20:24.084043 1860751 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:20:24.084140 1860751 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:20:24.084149 1860751 kubeadm.go:310] 
	I0127 14:20:24.084263 1860751 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:20:24.084383 1860751 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:20:24.084400 1860751 kubeadm.go:310] 
	I0127 14:20:24.084497 1860751 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
	I0127 14:20:24.084584 1860751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e \
	I0127 14:20:24.084604 1860751 kubeadm.go:310] 	--control-plane 
	I0127 14:20:24.084610 1860751 kubeadm.go:310] 
	I0127 14:20:24.084679 1860751 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:20:24.084685 1860751 kubeadm.go:310] 
	I0127 14:20:24.084750 1860751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
	I0127 14:20:24.084894 1860751 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e 
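	For reference, the sha256 value passed to --discovery-token-ca-cert-hash above is the hash of the cluster CA's public key. A sketch of how it can be recomputed on the control-plane host, following the standard kubeadm procedure (paths assume the default /etc/kubernetes/pki layout):
	
		# Compute the discovery-token CA cert hash from the cluster CA certificate
		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex \
		  | sed 's/^.* //'
	
	The output should match the hash embedded in the kubeadm join commands printed above.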
	I0127 14:20:24.084923 1860751 cni.go:84] Creating CNI manager for ""
	I0127 14:20:24.084937 1860751 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:20:24.086257 1860751 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:20:24.087300 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:20:24.097744 1860751 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:20:24.115867 1860751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:20:24.115958 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:24.115962 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-212529 minikube.k8s.io/updated_at=2025_01_27T14_20_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=default-k8s-diff-port-212529 minikube.k8s.io/primary=true
	I0127 14:20:24.324045 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:24.324042 1860751 ops.go:34] apiserver oom_adj: -16
	I0127 14:20:24.824528 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:25.324196 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:25.824971 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:26.324285 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:26.825007 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:27.324812 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:27.824252 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:28.324496 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:20:28.413845 1860751 kubeadm.go:1113] duration metric: took 4.297974897s to wait for elevateKubeSystemPrivileges
	I0127 14:20:28.413890 1860751 kubeadm.go:394] duration metric: took 5m43.681075591s to StartCluster
	I0127 14:20:28.413911 1860751 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:20:28.414029 1860751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:20:28.416135 1860751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:20:28.416434 1860751 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.145 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 14:20:28.416580 1860751 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:20:28.416710 1860751 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416715 1860751 config.go:182] Loaded profile config "default-k8s-diff-port-212529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:20:28.416736 1860751 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.416745 1860751 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:20:28.416742 1860751 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416759 1860751 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.416785 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.416797 1860751 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.416807 1860751 addons.go:247] addon dashboard should already be in state true
	I0127 14:20:28.416843 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.417198 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417233 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.417240 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417275 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.416772 1860751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-212529"
	I0127 14:20:28.416777 1860751 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-212529"
	I0127 14:20:28.417322 1860751 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.417337 1860751 addons.go:247] addon metrics-server should already be in state true
	I0127 14:20:28.417560 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.417900 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.417916 1860751 out.go:177] * Verifying Kubernetes components...
	I0127 14:20:28.417955 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.417963 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.418005 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.419061 1860751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:20:28.434949 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
	I0127 14:20:28.435505 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.436082 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.436114 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.436521 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.436752 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.437523 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0127 14:20:28.437697 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I0127 14:20:28.438072 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.438417 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.438657 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.438682 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.438906 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.438929 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.439056 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.439281 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.439489 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
	I0127 14:20:28.439624 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.439660 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.439804 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.439846 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.439944 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.440409 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.440432 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.440811 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.441377 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.441420 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.441785 1860751 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-212529"
	W0127 14:20:28.441804 1860751 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:20:28.441836 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
	I0127 14:20:28.442074 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.442111 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.460558 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I0127 14:20:28.461043 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
	I0127 14:20:28.461200 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.461461 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.461725 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.461749 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.461814 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0127 14:20:28.462061 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.462083 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.462286 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.462330 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.462485 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.462605 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.462762 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.462775 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.462832 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.463228 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.463817 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
	I0127 14:20:28.463862 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:20:28.464659 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.465253 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.466108 1860751 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:20:28.466667 1860751 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:20:28.467300 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:20:28.467316 1860751 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:20:28.467333 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.469055 1860751 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:20:28.469287 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
	I0127 14:20:28.469629 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.470009 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:20:28.470027 1860751 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:20:28.470055 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.470158 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.470180 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.470774 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.470967 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.471164 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.471781 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.471814 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.472153 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.472327 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.472488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.472639 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.473502 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.473853 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.474311 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.474338 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.474488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.474652 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.474805 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.474896 1860751 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:20:28.474964 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.475898 1860751 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:20:28.475916 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:20:28.475933 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.478521 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.478927 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.478950 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.479131 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.479325 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.479479 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.479622 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.482246 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I0127 14:20:28.482637 1860751 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:20:28.483047 1860751 main.go:141] libmachine: Using API Version  1
	I0127 14:20:28.483068 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:20:28.483409 1860751 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:20:28.483542 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
	I0127 14:20:28.484999 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
	I0127 14:20:28.485241 1860751 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:20:28.485259 1860751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:20:28.485276 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
	I0127 14:20:28.488061 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.488402 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
	I0127 14:20:28.488429 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
	I0127 14:20:28.488581 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
	I0127 14:20:28.488725 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
	I0127 14:20:28.488858 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
	I0127 14:20:28.489030 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
	I0127 14:20:28.646865 1860751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:20:28.672532 1860751 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-212529" to be "Ready" ...
	I0127 14:20:28.703176 1860751 node_ready.go:49] node "default-k8s-diff-port-212529" has status "Ready":"True"
	I0127 14:20:28.703197 1860751 node_ready.go:38] duration metric: took 30.636379ms for node "default-k8s-diff-port-212529" to be "Ready" ...
	I0127 14:20:28.703206 1860751 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:28.710494 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:28.817820 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:20:28.817849 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:20:28.837871 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:20:28.851072 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:20:28.851107 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:20:28.852529 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:20:28.858946 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:20:28.858978 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:20:28.897376 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:20:28.897409 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:20:28.944458 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:20:28.944489 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:20:28.996770 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:20:28.996799 1860751 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:20:29.041836 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:20:29.066199 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:20:29.066234 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:20:29.191066 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:20:29.191092 1860751 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:20:29.292937 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:20:29.292970 1860751 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:20:29.324574 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:20:29.324605 1860751 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:20:29.381589 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:20:29.381618 1860751 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:20:29.579396 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:20:29.579421 1860751 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:20:29.730806 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:20:30.332634 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.480056609s)
	I0127 14:20:30.332719 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.332740 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.332753 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.494842628s)
	I0127 14:20:30.332799 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.332812 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333060 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333080 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.333120 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.333128 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333246 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333271 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.333280 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.333287 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.333331 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
	I0127 14:20:30.333499 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.333513 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.335273 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.335291 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.402574 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.402607 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.402929 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.402951 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.597814 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555933063s)
	I0127 14:20:30.597873 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.597890 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.598223 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.598244 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.598254 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:30.598262 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:30.598523 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:30.598545 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:30.598558 1860751 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-212529"
	I0127 14:20:30.720235 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:31.251992 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.52112686s)
	I0127 14:20:31.252076 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:31.252099 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:31.252456 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:31.252477 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:31.252487 1860751 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:31.252495 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
	I0127 14:20:31.252788 1860751 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:31.252797 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
	I0127 14:20:31.252810 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:31.254461 1860751 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-212529 addons enable metrics-server
	
	I0127 14:20:31.255681 1860751 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 14:20:31.256922 1860751 addons.go:514] duration metric: took 2.840355251s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 14:20:33.216592 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:35.217244 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:37.731702 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.731733 1860751 pod_ready.go:82] duration metric: took 9.021206919s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.731747 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.761047 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.761074 1860751 pod_ready.go:82] duration metric: took 29.318136ms for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.761084 1860751 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.772463 1860751 pod_ready.go:93] pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.772491 1860751 pod_ready.go:82] duration metric: took 11.399303ms for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.772504 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.780269 1860751 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.780294 1860751 pod_ready.go:82] duration metric: took 7.782307ms for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.780306 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.785276 1860751 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.785304 1860751 pod_ready.go:82] duration metric: took 4.986421ms for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.785315 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.114939 1860751 pod_ready.go:93] pod "kube-proxy-f5fmd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:38.114969 1860751 pod_ready.go:82] duration metric: took 329.644964ms for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.114981 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.515806 1860751 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:38.515832 1860751 pod_ready.go:82] duration metric: took 400.844808ms for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:38.515841 1860751 pod_ready.go:39] duration metric: took 9.812625577s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:38.515859 1860751 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:20:38.515918 1860751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:20:38.534333 1860751 api_server.go:72] duration metric: took 10.117851719s to wait for apiserver process to appear ...
	I0127 14:20:38.534364 1860751 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:20:38.534390 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
	I0127 14:20:38.540410 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 200:
	ok
	I0127 14:20:38.541651 1860751 api_server.go:141] control plane version: v1.32.1
	I0127 14:20:38.541674 1860751 api_server.go:131] duration metric: took 7.30288ms to wait for apiserver health ...
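	The health check recorded here is an HTTPS GET against /healthz on the apiserver's non-default port 8444. Assuming the default system:public-info-viewer binding (which exposes /healthz, /readyz and /version to unauthenticated clients), the same probe can be reproduced manually from the host:
	
		# Query the apiserver health endpoint directly (TLS verification skipped for brevity)
		curl -k https://192.168.50.145:8444/healthz
		# Expected output on a healthy control plane: ok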
	I0127 14:20:38.541685 1860751 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:20:38.725366 1860751 system_pods.go:59] 9 kube-system pods found
	I0127 14:20:38.725397 1860751 system_pods.go:61] "coredns-668d6bf9bc-g77l4" [4457b047-3339-455e-ab06-15a1e4d7a95f] Running
	I0127 14:20:38.725402 1860751 system_pods.go:61] "coredns-668d6bf9bc-gwfcp" [d557581e-b74a-482d-9c8c-12e1b51d11d5] Running
	I0127 14:20:38.725406 1860751 system_pods.go:61] "etcd-default-k8s-diff-port-212529" [1e347129-845b-4c34-831c-e056cccc90f7] Running
	I0127 14:20:38.725410 1860751 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-212529" [1472d317-bd0d-4957-a955-d69eb5339d2a] Running
	I0127 14:20:38.725414 1860751 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-212529" [0e5e7440-7389-4bc8-9ee5-0e8041edef25] Running
	I0127 14:20:38.725417 1860751 system_pods.go:61] "kube-proxy-f5fmd" [a08f6d90-467b-4972-8c03-d62d07e108e5] Running
	I0127 14:20:38.725422 1860751 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-212529" [34188644-73d6-4567-856a-895cef0abac8] Running
	I0127 14:20:38.725431 1860751 system_pods.go:61] "metrics-server-f79f97bbb-gpkgd" [ec65f4da-1a84-4dab-9969-3ed09e9fdce2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:20:38.725436 1860751 system_pods.go:61] "storage-provisioner" [72ed4f2a-f894-4246-8596-b02befc5fde4] Running
	I0127 14:20:38.725448 1860751 system_pods.go:74] duration metric: took 183.756587ms to wait for pod list to return data ...
	I0127 14:20:38.725461 1860751 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:20:38.916064 1860751 default_sa.go:45] found service account: "default"
	I0127 14:20:38.916100 1860751 default_sa.go:55] duration metric: took 190.628425ms for default service account to be created ...
	I0127 14:20:38.916114 1860751 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:20:39.121453 1860751 system_pods.go:87] 9 kube-system pods found
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	e13c5e62966fa       523cad1a4df73       4 minutes ago       Exited              dashboard-metrics-scraper   8                   460a7d2ae033d       dashboard-metrics-scraper-86c6bf9756-mhg9r
	8647d690e7fb1       07655ddf2eebe       20 minutes ago      Running             kubernetes-dashboard        0                   2b3c90a0eda48       kubernetes-dashboard-7779f9b69b-jbbb2
	0a908e4ccd09a       6e38f40d628db       20 minutes ago      Running             storage-provisioner         0                   90a0507185894       storage-provisioner
	dabe1587c2d89       c69fa2e9cbf5f       20 minutes ago      Running             coredns                     0                   ea2bbd706108b       coredns-668d6bf9bc-gwfcp
	e21dff1d4394b       c69fa2e9cbf5f       20 minutes ago      Running             coredns                     0                   d7d9eb566b64b       coredns-668d6bf9bc-g77l4
	7aa48ce854b71       e29f9c7391fd9       20 minutes ago      Running             kube-proxy                  0                   258eb93682096       kube-proxy-f5fmd
	f1fd71ece07b9       a9e7e6b294baf       20 minutes ago      Running             etcd                        2                   04adbbe810006       etcd-default-k8s-diff-port-212529
	85171d8307aad       2b0d6572d062c       20 minutes ago      Running             kube-scheduler              2                   4cd390813fdac       kube-scheduler-default-k8s-diff-port-212529
	9815148e4cedf       019ee182b58e2       20 minutes ago      Running             kube-controller-manager     3                   d7b1539a09b93       kube-controller-manager-default-k8s-diff-port-212529
	76fd3defcc1c6       95c0bda56fc4d       20 minutes ago      Running             kube-apiserver              3                   36f3f79a3136b       kube-apiserver-default-k8s-diff-port-212529
	
	
	==> containerd <==
	Jan 27 14:31:21 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:21.329496858Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 14:31:21 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:21.331599112Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 14:31:21 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:21.331695915Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 14:31:46 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:46.320666543Z" level=info msg="CreateContainer within sandbox \"460a7d2ae033d8f2b54294cd8ebaa92b59607b69d042b6884eebd57ab5ea50dc\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:7,}"
	Jan 27 14:31:46 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:46.345226490Z" level=info msg="CreateContainer within sandbox \"460a7d2ae033d8f2b54294cd8ebaa92b59607b69d042b6884eebd57ab5ea50dc\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:7,} returns container id \"846c584dc1214194c9b0d2a677450a62107034d5dace7901f0d32356ec773420\""
	Jan 27 14:31:46 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:46.346955216Z" level=info msg="StartContainer for \"846c584dc1214194c9b0d2a677450a62107034d5dace7901f0d32356ec773420\""
	Jan 27 14:31:46 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:46.424284796Z" level=info msg="StartContainer for \"846c584dc1214194c9b0d2a677450a62107034d5dace7901f0d32356ec773420\" returns successfully"
	Jan 27 14:31:46 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:46.479785697Z" level=info msg="shim disconnected" id=846c584dc1214194c9b0d2a677450a62107034d5dace7901f0d32356ec773420 namespace=k8s.io
	Jan 27 14:31:46 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:46.479913916Z" level=warning msg="cleaning up after shim disconnected" id=846c584dc1214194c9b0d2a677450a62107034d5dace7901f0d32356ec773420 namespace=k8s.io
	Jan 27 14:31:46 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:46.479951116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 14:31:47 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:47.197983475Z" level=info msg="RemoveContainer for \"b1abbab197f3272490267fea14004f459bffda91810ce1722ab276cdb0bb8a5a\""
	Jan 27 14:31:47 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:31:47.207865376Z" level=info msg="RemoveContainer for \"b1abbab197f3272490267fea14004f459bffda91810ce1722ab276cdb0bb8a5a\" returns successfully"
	Jan 27 14:36:27 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:27.318197492Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 14:36:27 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:27.330215596Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 14:36:27 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:27.332572438Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 14:36:27 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:27.332632125Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 14:36:50 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:50.320183445Z" level=info msg="CreateContainer within sandbox \"460a7d2ae033d8f2b54294cd8ebaa92b59607b69d042b6884eebd57ab5ea50dc\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 14:36:50 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:50.343962204Z" level=info msg="CreateContainer within sandbox \"460a7d2ae033d8f2b54294cd8ebaa92b59607b69d042b6884eebd57ab5ea50dc\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58\""
	Jan 27 14:36:50 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:50.345495485Z" level=info msg="StartContainer for \"e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58\""
	Jan 27 14:36:50 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:50.421994543Z" level=info msg="StartContainer for \"e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58\" returns successfully"
	Jan 27 14:36:50 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:50.468460210Z" level=info msg="shim disconnected" id=e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58 namespace=k8s.io
	Jan 27 14:36:50 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:50.468712158Z" level=warning msg="cleaning up after shim disconnected" id=e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58 namespace=k8s.io
	Jan 27 14:36:50 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:50.468813866Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 14:36:50 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:50.886380036Z" level=info msg="RemoveContainer for \"846c584dc1214194c9b0d2a677450a62107034d5dace7901f0d32356ec773420\""
	Jan 27 14:36:50 default-k8s-diff-port-212529 containerd[564]: time="2025-01-27T14:36:50.897605928Z" level=info msg="RemoveContainer for \"846c584dc1214194c9b0d2a677450a62107034d5dace7901f0d32356ec773420\" returns successfully"
	
	
	==> coredns [dabe1587c2d89e26066c01f3b19121f1f1d0d41f1734982c5538480dee8b2eb5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e21dff1d4394b64aba65c2db6dcf7a6d1b07dfd6bcccccc44c56bec06a1141bd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-212529
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-212529
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d
	                    minikube.k8s.io/name=default-k8s-diff-port-212529
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_20_24_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:20:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-212529
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:40:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:39:45 +0000   Mon, 27 Jan 2025 14:20:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:39:45 +0000   Mon, 27 Jan 2025 14:20:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:39:45 +0000   Mon, 27 Jan 2025 14:20:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:39:45 +0000   Mon, 27 Jan 2025 14:20:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.145
	  Hostname:    default-k8s-diff-port-212529
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9cf9d51e813c40478d782a56b8a8bc01
	  System UUID:                9cf9d51e-813c-4047-8d78-2a56b8a8bc01
	  Boot ID:                    a85d954b-a093-442a-bc09-815e33e63907
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-g77l4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-668d6bf9bc-gwfcp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-212529                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-212529             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-212529    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-f5fmd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-212529             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-f79f97bbb-gpkgd                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-mhg9r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-jbbb2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20m   kube-proxy       
	  Normal  Starting                 20m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m   kubelet          Node default-k8s-diff-port-212529 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m   kubelet          Node default-k8s-diff-port-212529 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m   kubelet          Node default-k8s-diff-port-212529 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m   node-controller  Node default-k8s-diff-port-212529 event: Registered Node default-k8s-diff-port-212529 in Controller
	
	
	==> dmesg <==
	[  +0.037116] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.931518] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.122070] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.577526] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.442796] systemd-fstab-generator[485]: Ignoring "noauto" option for root device
	[  +0.067125] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059343] systemd-fstab-generator[497]: Ignoring "noauto" option for root device
	[  +0.188366] systemd-fstab-generator[511]: Ignoring "noauto" option for root device
	[  +0.136040] systemd-fstab-generator[523]: Ignoring "noauto" option for root device
	[  +0.306466] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +1.798388] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +2.189728] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.803277] kauditd_printk_skb: 225 callbacks suppressed
	[Jan27 14:15] kauditd_printk_skb: 20 callbacks suppressed
	[ +23.180414] kauditd_printk_skb: 2 callbacks suppressed
	[Jan27 14:16] kauditd_printk_skb: 87 callbacks suppressed
	[Jan27 14:20] systemd-fstab-generator[3211]: Ignoring "noauto" option for root device
	[  +6.040812] systemd-fstab-generator[3588]: Ignoring "noauto" option for root device
	[  +0.070710] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.323845] systemd-fstab-generator[3688]: Ignoring "noauto" option for root device
	[  +0.119807] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.638702] kauditd_printk_skb: 108 callbacks suppressed
	[  +8.420927] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [f1fd71ece07b9ff0579ba9d39af1c88143fb006d055f856ebee04e3a6339d039] <==
	{"level":"info","ts":"2025-01-27T14:20:19.054510Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:20:19.056855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T14:20:19.057568Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"282c0c59245c379b","local-member-id":"1b7110dbe02700b5","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T14:20:19.065851Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T14:20:19.066034Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T14:20:41.664127Z","caller":"traceutil/trace.go:171","msg":"trace[542720339] linearizableReadLoop","detail":"{readStateIndex:519; appliedIndex:519; }","duration":"176.87614ms","start":"2025-01-27T14:20:41.486431Z","end":"2025-01-27T14:20:41.663307Z","steps":["trace[542720339] 'read index received'  (duration: 176.869941ms)","trace[542720339] 'applied index is now lower than readState.Index'  (duration: 5.434µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:20:41.664227Z","caller":"traceutil/trace.go:171","msg":"trace[1112952571] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"314.844934ms","start":"2025-01-27T14:20:41.348124Z","end":"2025-01-27T14:20:41.662969Z","steps":["trace[1112952571] 'process raft request'  (duration: 314.732641ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:20:41.666596Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.523482ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:20:41.666678Z","caller":"traceutil/trace.go:171","msg":"trace[1441153391] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:506; }","duration":"180.243025ms","start":"2025-01-27T14:20:41.486426Z","end":"2025-01-27T14:20:41.666669Z","steps":["trace[1441153391] 'agreement among raft nodes before linearized reading'  (duration: 178.874049ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:20:41.667753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.641142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:20:41.667798Z","caller":"traceutil/trace.go:171","msg":"trace[173226069] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:507; }","duration":"171.707307ms","start":"2025-01-27T14:20:41.496083Z","end":"2025-01-27T14:20:41.667791Z","steps":["trace[173226069] 'agreement among raft nodes before linearized reading'  (duration: 171.607996ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:20:41.674936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:20:41.348107Z","time spent":"317.477518ms","remote":"127.0.0.1:35462","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:503 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-01-27T14:20:42.532292Z","caller":"traceutil/trace.go:171","msg":"trace[1247053970] linearizableReadLoop","detail":"{readStateIndex:522; appliedIndex:521; }","duration":"132.199372ms","start":"2025-01-27T14:20:42.400078Z","end":"2025-01-27T14:20:42.532277Z","steps":["trace[1247053970] 'read index received'  (duration: 129.608189ms)","trace[1247053970] 'applied index is now lower than readState.Index'  (duration: 2.590577ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:20:42.532422Z","caller":"traceutil/trace.go:171","msg":"trace[1008232080] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"199.619887ms","start":"2025-01-27T14:20:42.332795Z","end":"2025-01-27T14:20:42.532415Z","steps":["trace[1008232080] 'process raft request'  (duration: 196.93325ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:20:42.532739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.639488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.145\" limit:1 ","response":"range_response_count:1 size:134"}
	{"level":"info","ts":"2025-01-27T14:20:42.532775Z","caller":"traceutil/trace.go:171","msg":"trace[1681257457] range","detail":"{range_begin:/registry/masterleases/192.168.50.145; range_end:; response_count:1; response_revision:509; }","duration":"132.713469ms","start":"2025-01-27T14:20:42.400053Z","end":"2025-01-27T14:20:42.532767Z","steps":["trace[1681257457] 'agreement among raft nodes before linearized reading'  (duration: 132.569546ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:30:19.106373Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":837}
	{"level":"info","ts":"2025-01-27T14:30:19.135049Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":837,"took":"28.033805ms","hash":703818500,"current-db-size-bytes":2949120,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2949120,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-27T14:30:19.135124Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":703818500,"revision":837,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T14:35:19.115826Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1087}
	{"level":"info","ts":"2025-01-27T14:35:19.119900Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1087,"took":"3.411932ms","hash":2217159035,"current-db-size-bytes":2949120,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1740800,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T14:35:19.120075Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2217159035,"revision":1087,"compact-revision":837}
	{"level":"info","ts":"2025-01-27T14:40:19.122926Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1338}
	{"level":"info","ts":"2025-01-27T14:40:19.127119Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1338,"took":"3.42759ms","hash":2449238663,"current-db-size-bytes":2949120,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1744896,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T14:40:19.127169Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2449238663,"revision":1338,"compact-revision":1087}
	
	
	==> kernel <==
	 14:40:50 up 26 min,  0 users,  load average: 0.02, 0.09, 0.13
	Linux default-k8s-diff-port-212529 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [76fd3defcc1c60b1e4db5ada877f8d82ab2c6eca8a3d9bc7fbab367ecefda4e4] <==
	I0127 14:36:21.934856       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:36:21.934910       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 14:38:21.935469       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:38:21.935708       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 14:38:21.935942       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:38:21.936152       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 14:38:21.936899       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:38:21.937950       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 14:40:20.935150       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:40:20.935634       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 14:40:21.937308       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:40:21.937440       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 14:40:21.937309       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:40:21.937501       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 14:40:21.938784       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:40:21.938804       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9815148e4cedffc2bff95443a5275a8498c6aa39c358b49f3e9c7388dc1a663a] <==
	E0127 14:35:57.710562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:35:57.832011       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:36:27.716873       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:36:27.847410       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 14:36:42.332472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="311.216µs"
	I0127 14:36:50.898866       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="51.44µs"
	I0127 14:36:53.340837       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="101.801µs"
	I0127 14:36:57.061032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="49.816µs"
	E0127 14:36:57.723175       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:36:57.855413       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:37:27.730700       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:37:27.864731       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:37:57.737967       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:37:57.872482       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:38:27.744426       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:38:27.885048       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:38:57.750148       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:38:57.892142       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:39:27.757313       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:39:27.900216       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 14:39:45.792943       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-212529"
	E0127 14:39:57.763725       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:39:57.910081       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:40:27.769868       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:40:27.927061       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [7aa48ce854b71cb6749e31bc3069e144140a065408e6975cd3c2a0b67705257f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:20:30.041826       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:20:30.057564       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.145"]
	E0127 14:20:30.057708       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:20:30.472301       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:20:30.472392       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:20:30.474547       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:20:30.506650       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:20:30.509423       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:20:30.513130       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:20:30.517772       1 config.go:199] "Starting service config controller"
	I0127 14:20:30.517931       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:20:30.518253       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:20:30.518263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:20:30.519657       1 config.go:329] "Starting node config controller"
	I0127 14:20:30.519665       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:20:30.618268       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:20:30.618309       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:20:30.620180       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [85171d8307aad072eb2f3d070d003dd797b58fca1f3ccc882c77bf416c3fe482] <==
	W0127 14:20:20.945458       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 14:20:20.945587       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:20.945675       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 14:20:20.945770       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:21.764391       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 14:20:21.764650       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:21.815283       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 14:20:21.815539       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:21.815857       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 14:20:21.815954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:21.837024       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 14:20:21.837292       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:21.839608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 14:20:21.839657       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:21.843394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 14:20:21.844634       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:21.940509       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:20:21.940557       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:21.959958       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 14:20:21.960143       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:22.030475       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 14:20:22.030531       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:20:22.205052       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 14:20:22.205104       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 14:20:25.033020       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:39:25 default-k8s-diff-port-212529 kubelet[3595]: I0127 14:39:25.317125    3595 scope.go:117] "RemoveContainer" containerID="e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58"
	Jan 27 14:39:25 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:39:25.317410    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-mhg9r_kubernetes-dashboard(aafa51e8-ca5f-4384-bd3c-99b233d03a07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-mhg9r" podUID="aafa51e8-ca5f-4384-bd3c-99b233d03a07"
	Jan 27 14:39:35 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:39:35.318774    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-gpkgd" podUID="ec65f4da-1a84-4dab-9969-3ed09e9fdce2"
	Jan 27 14:39:39 default-k8s-diff-port-212529 kubelet[3595]: I0127 14:39:39.317176    3595 scope.go:117] "RemoveContainer" containerID="e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58"
	Jan 27 14:39:39 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:39:39.317826    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-mhg9r_kubernetes-dashboard(aafa51e8-ca5f-4384-bd3c-99b233d03a07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-mhg9r" podUID="aafa51e8-ca5f-4384-bd3c-99b233d03a07"
	Jan 27 14:39:50 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:39:50.317389    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-gpkgd" podUID="ec65f4da-1a84-4dab-9969-3ed09e9fdce2"
	Jan 27 14:39:51 default-k8s-diff-port-212529 kubelet[3595]: I0127 14:39:51.316010    3595 scope.go:117] "RemoveContainer" containerID="e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58"
	Jan 27 14:39:51 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:39:51.316276    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-mhg9r_kubernetes-dashboard(aafa51e8-ca5f-4384-bd3c-99b233d03a07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-mhg9r" podUID="aafa51e8-ca5f-4384-bd3c-99b233d03a07"
	Jan 27 14:40:01 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:40:01.317002    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-gpkgd" podUID="ec65f4da-1a84-4dab-9969-3ed09e9fdce2"
	Jan 27 14:40:04 default-k8s-diff-port-212529 kubelet[3595]: I0127 14:40:04.316852    3595 scope.go:117] "RemoveContainer" containerID="e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58"
	Jan 27 14:40:04 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:40:04.317308    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-mhg9r_kubernetes-dashboard(aafa51e8-ca5f-4384-bd3c-99b233d03a07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-mhg9r" podUID="aafa51e8-ca5f-4384-bd3c-99b233d03a07"
	Jan 27 14:40:16 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:40:16.317472    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-gpkgd" podUID="ec65f4da-1a84-4dab-9969-3ed09e9fdce2"
	Jan 27 14:40:18 default-k8s-diff-port-212529 kubelet[3595]: I0127 14:40:18.316666    3595 scope.go:117] "RemoveContainer" containerID="e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58"
	Jan 27 14:40:18 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:40:18.316851    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-mhg9r_kubernetes-dashboard(aafa51e8-ca5f-4384-bd3c-99b233d03a07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-mhg9r" podUID="aafa51e8-ca5f-4384-bd3c-99b233d03a07"
	Jan 27 14:40:23 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:40:23.354474    3595 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 14:40:23 default-k8s-diff-port-212529 kubelet[3595]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 14:40:23 default-k8s-diff-port-212529 kubelet[3595]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:40:23 default-k8s-diff-port-212529 kubelet[3595]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:40:23 default-k8s-diff-port-212529 kubelet[3595]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:40:27 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:40:27.317440    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-gpkgd" podUID="ec65f4da-1a84-4dab-9969-3ed09e9fdce2"
	Jan 27 14:40:31 default-k8s-diff-port-212529 kubelet[3595]: I0127 14:40:31.317125    3595 scope.go:117] "RemoveContainer" containerID="e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58"
	Jan 27 14:40:31 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:40:31.317663    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-mhg9r_kubernetes-dashboard(aafa51e8-ca5f-4384-bd3c-99b233d03a07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-mhg9r" podUID="aafa51e8-ca5f-4384-bd3c-99b233d03a07"
	Jan 27 14:40:39 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:40:39.318956    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-gpkgd" podUID="ec65f4da-1a84-4dab-9969-3ed09e9fdce2"
	Jan 27 14:40:42 default-k8s-diff-port-212529 kubelet[3595]: I0127 14:40:42.317051    3595 scope.go:117] "RemoveContainer" containerID="e13c5e62966fa665378a55079088bc6cdd377f7f6144902146a7439752633d58"
	Jan 27 14:40:42 default-k8s-diff-port-212529 kubelet[3595]: E0127 14:40:42.317220    3595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-mhg9r_kubernetes-dashboard(aafa51e8-ca5f-4384-bd3c-99b233d03a07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-mhg9r" podUID="aafa51e8-ca5f-4384-bd3c-99b233d03a07"
	
	
	==> kubernetes-dashboard [8647d690e7fb16f89861e91a92bd76b683a9df34dd0b865a24732c90a102e50c] <==
	2025/01/27 14:28:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:29:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:29:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:30:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:30:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:31:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:31:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:32:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:32:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:33:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:33:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:34:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:34:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:35:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:35:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:36:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:36:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:37:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:37:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:38:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:38:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:39:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:39:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:40:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:40:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0a908e4ccd09a281400b9c61383c69446c845dc22a4ce83c963c519acd91dc3d] <==
	I0127 14:20:31.199540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:20:31.229456       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:20:31.229740       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:20:31.248123       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:20:31.248649       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-212529_ea2b82a2-cd12-4d54-b983-1ff273d99c59!
	I0127 14:20:31.251623       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1bc8c475-dcef-40df-a9af-554ab3e62ee6", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-212529_ea2b82a2-cd12-4d54-b983-1ff273d99c59 became leader
	I0127 14:20:31.349769       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-212529_ea2b82a2-cd12-4d54-b983-1ff273d99c59!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-212529 -n default-k8s-diff-port-212529
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-212529 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-gpkgd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-212529 describe pod metrics-server-f79f97bbb-gpkgd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-212529 describe pod metrics-server-f79f97bbb-gpkgd: exit status 1 (57.592266ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-gpkgd" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-212529 describe pod metrics-server-f79f97bbb-gpkgd: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1590.94s)

Test pass (275/316)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 22.61
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 11.85
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 81.64
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 205.88
29 TestAddons/serial/Volcano 41.8
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.47
35 TestAddons/parallel/Registry 15.54
36 TestAddons/parallel/Ingress 20.45
37 TestAddons/parallel/InspektorGadget 10.91
38 TestAddons/parallel/MetricsServer 6.13
40 TestAddons/parallel/CSI 55.87
41 TestAddons/parallel/Headlamp 19.73
42 TestAddons/parallel/CloudSpanner 6.53
43 TestAddons/parallel/LocalPath 54.38
44 TestAddons/parallel/NvidiaDevicePlugin 5.47
45 TestAddons/parallel/Yakd 11.9
47 TestAddons/StoppedEnableDisable 91.24
48 TestCertOptions 48.73
49 TestCertExpiration 276.73
51 TestForceSystemdFlag 69.87
52 TestForceSystemdEnv 90.16
54 TestKVMDriverInstallOrUpdate 6.19
58 TestErrorSpam/setup 42.88
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.47
62 TestErrorSpam/unpause 1.65
63 TestErrorSpam/stop 4.01
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 84.41
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 40.15
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
75 TestFunctional/serial/CacheCmd/cache/add_local 1.94
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 41.7
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.21
86 TestFunctional/serial/LogsFileCmd 1.28
87 TestFunctional/serial/InvalidService 3.9
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 14.12
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.75
97 TestFunctional/parallel/ServiceCmdConnect 8.45
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 48.37
101 TestFunctional/parallel/SSHCmd 0.39
102 TestFunctional/parallel/CpCmd 1.29
103 TestFunctional/parallel/MySQL 34.6
104 TestFunctional/parallel/FileSync 0.19
105 TestFunctional/parallel/CertSync 1.2
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
113 TestFunctional/parallel/License 0.56
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.18
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
116 TestFunctional/parallel/ProfileCmd/profile_list 0.34
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
118 TestFunctional/parallel/MountCmd/any-port 8.51
119 TestFunctional/parallel/MountCmd/specific-port 1.82
120 TestFunctional/parallel/ServiceCmd/List 0.42
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
123 TestFunctional/parallel/ServiceCmd/Format 0.41
124 TestFunctional/parallel/MountCmd/VerifyCleanup 1.35
125 TestFunctional/parallel/ServiceCmd/URL 0.35
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.43
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.44
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
132 TestFunctional/parallel/ImageCommands/ImageBuild 4.05
133 TestFunctional/parallel/ImageCommands/Setup 1.98
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.71
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 191.53
160 TestMultiControlPlane/serial/DeployApp 5.71
161 TestMultiControlPlane/serial/PingHostFromPods 1.15
162 TestMultiControlPlane/serial/AddWorkerNode 59.97
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
165 TestMultiControlPlane/serial/CopyFile 12.94
166 TestMultiControlPlane/serial/StopSecondaryNode 91.61
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
168 TestMultiControlPlane/serial/RestartSecondaryNode 41.91
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 450.99
171 TestMultiControlPlane/serial/DeleteSecondaryNode 6.71
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
173 TestMultiControlPlane/serial/StopCluster 272.72
174 TestMultiControlPlane/serial/RestartCluster 134.55
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.6
176 TestMultiControlPlane/serial/AddSecondaryNode 73.5
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
181 TestJSONOutput/start/Command 55.84
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.63
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.63
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.47
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 88.96
213 TestMountStart/serial/StartWithMountFirst 25.28
214 TestMountStart/serial/VerifyMountFirst 0.37
215 TestMountStart/serial/StartWithMountSecond 27.94
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.67
218 TestMountStart/serial/VerifyMountPostDelete 0.37
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 23.93
221 TestMountStart/serial/VerifyMountPostStop 0.37
224 TestMultiNode/serial/FreshStart2Nodes 110.97
225 TestMultiNode/serial/DeployApp2Nodes 4.92
226 TestMultiNode/serial/PingHostFrom2Pods 0.75
227 TestMultiNode/serial/AddNode 60.27
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.57
230 TestMultiNode/serial/CopyFile 7.19
231 TestMultiNode/serial/StopNode 2.16
232 TestMultiNode/serial/StartAfterStop 34.23
233 TestMultiNode/serial/RestartKeepsNodes 308.47
234 TestMultiNode/serial/DeleteNode 2.16
235 TestMultiNode/serial/StopMultiNode 182.03
236 TestMultiNode/serial/RestartMultiNode 91.35
237 TestMultiNode/serial/ValidateNameConflict 42.74
242 TestPreload 233.45
244 TestScheduledStopUnix 111.28
248 TestRunningBinaryUpgrade 163.28
250 TestKubernetesUpgrade 186.75
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
254 TestNoKubernetes/serial/StartWithK8s 112.85
262 TestNetworkPlugins/group/false 3.28
266 TestNoKubernetes/serial/StartWithStopK8s 54.49
267 TestNoKubernetes/serial/Start 71.62
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
269 TestNoKubernetes/serial/ProfileList 18.67
270 TestNoKubernetes/serial/Stop 1.36
271 TestNoKubernetes/serial/StartNoArgs 33.3
272 TestStoppedBinaryUpgrade/Setup 2.27
273 TestStoppedBinaryUpgrade/Upgrade 130.86
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
283 TestPause/serial/Start 112.33
284 TestNetworkPlugins/group/auto/Start 98.79
285 TestNetworkPlugins/group/kindnet/Start 101.63
286 TestNetworkPlugins/group/auto/KubeletFlags 0.22
287 TestNetworkPlugins/group/auto/NetCatPod 10.29
288 TestPause/serial/SecondStartNoReconfiguration 46.66
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.74
290 TestNetworkPlugins/group/calico/Start 89.64
291 TestNetworkPlugins/group/auto/DNS 0.16
292 TestNetworkPlugins/group/auto/Localhost 0.13
293 TestNetworkPlugins/group/auto/HairPin 0.13
294 TestNetworkPlugins/group/custom-flannel/Start 91.47
295 TestPause/serial/Pause 0.71
296 TestPause/serial/VerifyStatus 0.25
297 TestPause/serial/Unpause 0.65
298 TestPause/serial/PauseAgain 0.77
299 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
300 TestPause/serial/DeletePaused 0.85
301 TestPause/serial/VerifyDeletedResources 0.64
302 TestNetworkPlugins/group/enable-default-cni/Start 94.46
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
304 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
305 TestNetworkPlugins/group/kindnet/DNS 0.18
306 TestNetworkPlugins/group/kindnet/Localhost 0.15
307 TestNetworkPlugins/group/kindnet/HairPin 0.18
308 TestNetworkPlugins/group/flannel/Start 80.92
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.23
311 TestNetworkPlugins/group/calico/NetCatPod 9.23
312 TestNetworkPlugins/group/calico/DNS 0.14
313 TestNetworkPlugins/group/calico/Localhost 0.11
314 TestNetworkPlugins/group/calico/HairPin 0.18
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.82
317 TestNetworkPlugins/group/bridge/Start 59.64
318 TestNetworkPlugins/group/custom-flannel/DNS 0.19
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
323 TestStartStop/group/old-k8s-version/serial/FirstStart 183.85
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.22
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
330 TestNetworkPlugins/group/flannel/NetCatPod 9.23
332 TestStartStop/group/no-preload/serial/FirstStart 110.03
333 TestNetworkPlugins/group/flannel/DNS 0.15
334 TestNetworkPlugins/group/flannel/Localhost 0.14
335 TestNetworkPlugins/group/flannel/HairPin 0.12
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
337 TestNetworkPlugins/group/bridge/NetCatPod 9.41
339 TestStartStop/group/embed-certs/serial/FirstStart 66.55
340 TestNetworkPlugins/group/bridge/DNS 0.13
341 TestNetworkPlugins/group/bridge/Localhost 0.11
342 TestNetworkPlugins/group/bridge/HairPin 0.11
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 109.34
345 TestStartStop/group/embed-certs/serial/DeployApp 10.3
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
347 TestStartStop/group/embed-certs/serial/Stop 90.81
348 TestStartStop/group/no-preload/serial/DeployApp 10.28
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
350 TestStartStop/group/no-preload/serial/Stop 91.34
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
352 TestStartStop/group/old-k8s-version/serial/DeployApp 10.4
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.28
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.91
356 TestStartStop/group/old-k8s-version/serial/Stop 91.36
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
364 TestStartStop/group/old-k8s-version/serial/SecondStart 179.29
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
368 TestStartStop/group/old-k8s-version/serial/Pause 2.5
370 TestStartStop/group/newest-cni/serial/FirstStart 51.01
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
373 TestStartStop/group/newest-cni/serial/Stop 7.42
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/newest-cni/serial/SecondStart 38.29
376 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
379 TestStartStop/group/newest-cni/serial/Pause 2.36

TestDownloadOnly/v1.20.0/json-events (22.61s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-833953 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-833953 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (22.611340898s)
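For reference, the command above can be replayed outside the test harness. A minimal Go sketch (not part of the suite; it assumes a locally built out/minikube-linux-amd64 and simply streams the -o=json event lines to stdout):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Flags copied from the log line above.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-o=json", "--download-only", "-p", "download-only-833953",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0",
		"--container-runtime=containerd", "--driver=kvm2")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// With -o=json, each stdout line should be one JSON progress event.
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("minikube start failed: %v", err)
	}
}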
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.61s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 13:02:01.866656 1806070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 13:02:01.866855 1806070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
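The preload-exists check above only verifies that the cached tarball landed on disk after the download-only run. A minimal sketch of the same assertion (the path is copied from the log; substitute your own MINIKUBE_HOME):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken verbatim from the log line above.
	preload := "/home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4"
	if _, err := os.Stat(preload); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("preload found:", preload)
}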
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-833953
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-833953: exit status 85 (61.352609ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-833953 | jenkins | v1.35.0 | 27 Jan 25 13:01 UTC |          |
	|         | -p download-only-833953        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:01:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:01:39.300588 1806083 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:01:39.300722 1806083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:01:39.300733 1806083 out.go:358] Setting ErrFile to fd 2...
	I0127 13:01:39.300740 1806083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:01:39.300913 1806083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	W0127 13:01:39.301044 1806083 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20327-1798877/.minikube/config/config.json: open /home/jenkins/minikube-integration/20327-1798877/.minikube/config/config.json: no such file or directory
	I0127 13:01:39.301688 1806083 out.go:352] Setting JSON to true
	I0127 13:01:39.302705 1806083 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":35040,"bootTime":1737947859,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:01:39.302864 1806083 start.go:139] virtualization: kvm guest
	I0127 13:01:39.305103 1806083 out.go:97] [download-only-833953] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0127 13:01:39.305241 1806083 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 13:01:39.305289 1806083 notify.go:220] Checking for updates...
	I0127 13:01:39.306670 1806083 out.go:169] MINIKUBE_LOCATION=20327
	I0127 13:01:39.307996 1806083 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:01:39.309145 1806083 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 13:01:39.310323 1806083 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 13:01:39.311278 1806083 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 13:01:39.313361 1806083 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 13:01:39.313587 1806083 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:01:39.349346 1806083 out.go:97] Using the kvm2 driver based on user configuration
	I0127 13:01:39.349370 1806083 start.go:297] selected driver: kvm2
	I0127 13:01:39.349377 1806083 start.go:901] validating driver "kvm2" against <nil>
	I0127 13:01:39.349727 1806083 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:01:39.349822 1806083 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:01:39.365224 1806083 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:01:39.365283 1806083 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:01:39.365803 1806083 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 13:01:39.365951 1806083 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 13:01:39.365981 1806083 cni.go:84] Creating CNI manager for ""
	I0127 13:01:39.366034 1806083 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:01:39.366044 1806083 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 13:01:39.366095 1806083 start.go:340] cluster config:
	{Name:download-only-833953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-833953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:01:39.366269 1806083 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:01:39.367854 1806083 out.go:97] Downloading VM boot image ...
	I0127 13:01:39.367885 1806083 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 13:01:48.761806 1806083 out.go:97] Starting "download-only-833953" primary control-plane node in "download-only-833953" cluster
	I0127 13:01:48.761852 1806083 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 13:01:48.860355 1806083 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0127 13:01:48.860383 1806083 cache.go:56] Caching tarball of preloaded images
	I0127 13:01:48.860554 1806083 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 13:01:48.862147 1806083 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 13:01:48.862174 1806083 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0127 13:01:48.957766 1806083 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-833953 host does not exist
	  To start a cluster, run: "minikube start -p download-only-833953"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-833953
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.32.1/json-events (11.85s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-314695 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-314695 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (11.850251784s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (11.85s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 13:02:14.047011 1806070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 13:02:14.047060 1806070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-314695
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-314695: exit status 85 (61.352664ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-833953 | jenkins | v1.35.0 | 27 Jan 25 13:01 UTC |                     |
	|         | -p download-only-833953        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 13:02 UTC | 27 Jan 25 13:02 UTC |
	| delete  | -p download-only-833953        | download-only-833953 | jenkins | v1.35.0 | 27 Jan 25 13:02 UTC | 27 Jan 25 13:02 UTC |
	| start   | -o=json --download-only        | download-only-314695 | jenkins | v1.35.0 | 27 Jan 25 13:02 UTC |                     |
	|         | -p download-only-314695        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:02:02
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:02:02.237871 1806337 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:02:02.237994 1806337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:02:02.238004 1806337 out.go:358] Setting ErrFile to fd 2...
	I0127 13:02:02.238008 1806337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:02:02.238203 1806337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 13:02:02.238819 1806337 out.go:352] Setting JSON to true
	I0127 13:02:02.239761 1806337 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":35063,"bootTime":1737947859,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:02:02.239876 1806337 start.go:139] virtualization: kvm guest
	I0127 13:02:02.241754 1806337 out.go:97] [download-only-314695] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:02:02.241892 1806337 notify.go:220] Checking for updates...
	I0127 13:02:02.243100 1806337 out.go:169] MINIKUBE_LOCATION=20327
	I0127 13:02:02.244157 1806337 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:02:02.245172 1806337 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 13:02:02.246255 1806337 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 13:02:02.247312 1806337 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 13:02:02.249138 1806337 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 13:02:02.249368 1806337 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:02:02.281347 1806337 out.go:97] Using the kvm2 driver based on user configuration
	I0127 13:02:02.281377 1806337 start.go:297] selected driver: kvm2
	I0127 13:02:02.281391 1806337 start.go:901] validating driver "kvm2" against <nil>
	I0127 13:02:02.281696 1806337 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:02:02.281767 1806337 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:02:02.296969 1806337 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:02:02.297032 1806337 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:02:02.297568 1806337 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 13:02:02.297703 1806337 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 13:02:02.297732 1806337 cni.go:84] Creating CNI manager for ""
	I0127 13:02:02.297785 1806337 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 13:02:02.297794 1806337 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 13:02:02.297845 1806337 start.go:340] cluster config:
	{Name:download-only-314695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-314695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:02:02.297945 1806337 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:02:02.299194 1806337 out.go:97] Starting "download-only-314695" primary control-plane node in "download-only-314695" cluster
	I0127 13:02:02.299212 1806337 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:02:02.814040 1806337 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 13:02:02.814075 1806337 cache.go:56] Caching tarball of preloaded images
	I0127 13:02:02.814275 1806337 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:02:02.815878 1806337 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 13:02:02.815898 1806337 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 ...
	I0127 13:02:02.917407 1806337 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:8f020f9a34bd60feec225b8429b992a8 -> /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-314695 host does not exist
	  To start a cluster, run: "minikube start -p download-only-314695"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-314695
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I0127 13:02:15.050125 1806070 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-380395 --alsologtostderr --binary-mirror http://127.0.0.1:44537 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-380395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-380395
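TestBinaryMirror points minikube at a throwaway local HTTP endpoint via --binary-mirror. A rough stand-in for such a mirror (a sketch, not the test's implementation; it assumes a ./mirror directory pre-populated with the kubectl/kubeadm/kubelet binaries minikube would otherwise download, and reuses the 127.0.0.1:44537 address from the log):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror at the address the test passed via --binary-mirror.
	log.Fatal(http.ListenAndServe("127.0.0.1:44537", http.FileServer(http.Dir("./mirror"))))
}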
--- PASS: TestBinaryMirror (0.63s)

TestOffline (81.64s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-899878 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-899878 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m20.733639543s)
helpers_test.go:175: Cleaning up "offline-containerd-899878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-899878
--- PASS: TestOffline (81.64s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-547451
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-547451: exit status 85 (55.744218ms)

-- stdout --
	* Profile "addons-547451" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-547451"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-547451
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-547451: exit status 85 (54.745832ms)

-- stdout --
	* Profile "addons-547451" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-547451"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (205.88s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-547451 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-547451 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m25.88440596s)
--- PASS: TestAddons/Setup (205.88s)

TestAddons/serial/Volcano (41.8s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 19.71172ms
addons_test.go:807: volcano-scheduler stabilized in 19.76371ms
addons_test.go:815: volcano-admission stabilized in 19.858974ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-hlqxn" [46866825-b780-49ff-9cb7-b290673c17d2] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003820404s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-xspnp" [bb8d5851-7dbc-4418-bb6b-60ce16260f7c] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004605378s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-rspjh" [3f215ea1-8a66-4b35-891e-f9acb284d257] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004256017s
addons_test.go:842: (dbg) Run:  kubectl --context addons-547451 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-547451 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-547451 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d74495b1-edb6-4be9-9c8a-754824639451] Pending
helpers_test.go:344: "test-job-nginx-0" [d74495b1-edb6-4be9-9c8a-754824639451] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d74495b1-edb6-4be9-9c8a-754824639451] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.007963508s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-547451 addons disable volcano --alsologtostderr -v=1: (11.385086724s)
--- PASS: TestAddons/serial/Volcano (41.80s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-547451 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-547451 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-547451 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-547451 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [256aab67-575f-4f6d-8c43-db4914051eac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [256aab67-575f-4f6d-8c43-db4914051eac] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003884487s
addons_test.go:633: (dbg) Run:  kubectl --context addons-547451 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-547451 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-547451 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

TestAddons/parallel/Registry (15.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.522118ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-pgsjh" [02c07bb6-d862-44b8-a521-ed65da6ce49c] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003261094s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6vnmr" [4ad5459c-02e7-4156-82bf-0a4be09c4880] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008746539s
addons_test.go:331: (dbg) Run:  kubectl --context addons-547451 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-547451 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-547451 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.77992913s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 ip
2025/01/27 13:06:56 [DEBUG] GET http://192.168.39.240:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable registry --alsologtostderr -v=1
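The `wget --spider` probe earlier in this test only confirms that the registry Service answers over HTTP. A rough in-cluster equivalent in Go (a sketch, not test code; it has to run from a pod inside the cluster so the Service DNS name resolves):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// HEAD request against the registry Service's cluster DNS name,
	// mirroring what `wget --spider` checks.
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry answered:", resp.Status)
}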
--- PASS: TestAddons/parallel/Registry (15.54s)

TestAddons/parallel/Ingress (20.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-547451 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-547451 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-547451 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2a78dc0e-7a88-4762-b47b-a87f34b4ebda] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2a78dc0e-7a88-4762-b47b-a87f34b4ebda] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003867756s
I0127 13:06:58.511574 1806070 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-547451 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.240
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-547451 addons disable ingress-dns --alsologtostderr -v=1: (1.359826467s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-547451 addons disable ingress --alsologtostderr -v=1: (7.811508916s)
--- PASS: TestAddons/parallel/Ingress (20.45s)

TestAddons/parallel/InspektorGadget (10.91s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2fdjj" [46a80be5-faa2-4815-b656-d24c06a3ff5f] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005478067s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-547451 addons disable inspektor-gadget --alsologtostderr -v=1: (5.901521695s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

TestAddons/parallel/MetricsServer (6.13s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.110683ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-8zzz5" [0bf2dacd-c365-4ef3-8506-3e3c179f8d31] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004177479s
addons_test.go:402: (dbg) Run:  kubectl --context addons-547451 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-547451 addons disable metrics-server --alsologtostderr -v=1: (1.038893886s)
--- PASS: TestAddons/parallel/MetricsServer (6.13s)

TestAddons/parallel/CSI (55.87s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0127 13:06:52.763297 1806070 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 13:06:52.788554 1806070 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 13:06:52.788584 1806070 kapi.go:107] duration metric: took 25.30764ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 25.318497ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-547451 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-547451 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [eef41dff-85bf-41e0-9787-79f5cc228c76] Pending
helpers_test.go:344: "task-pv-pod" [eef41dff-85bf-41e0-9787-79f5cc228c76] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [eef41dff-85bf-41e0-9787-79f5cc228c76] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003614815s
addons_test.go:511: (dbg) Run:  kubectl --context addons-547451 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-547451 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-547451 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-547451 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-547451 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-547451 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-547451 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c048077d-2a23-473e-bf41-902031816cef] Pending
helpers_test.go:344: "task-pv-pod-restore" [c048077d-2a23-473e-bf41-902031816cef] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c048077d-2a23-473e-bf41-902031816cef] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004319387s
addons_test.go:553: (dbg) Run:  kubectl --context addons-547451 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-547451 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-547451 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-547451 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.646155184s)
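The repeated `get pvc ... -o jsonpath={.status.phase}` invocations above are the harness polling a claim until it reports Bound. A minimal sketch of that loop (shelling out to kubectl the same way; the context and claim name are copied from the log, the retry count and sleep are arbitrary):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 60; i++ {
		// Same jsonpath query the helpers use above.
		out, err := exec.Command("kubectl", "--context", "addons-547451",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err != nil {
			log.Fatal(err)
		}
		phase := strings.TrimSpace(string(out))
		fmt.Println("pvc phase:", phase)
		if phase == "Bound" {
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("pvc never became Bound")
}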
--- PASS: TestAddons/parallel/CSI (55.87s)

TestAddons/parallel/Headlamp (19.73s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-547451 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-pwwwp" [b8393ddb-3e15-4dc6-beba-9a34b810807e] Pending
helpers_test.go:344: "headlamp-69d78d796f-pwwwp" [b8393ddb-3e15-4dc6-beba-9a34b810807e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-pwwwp" [b8393ddb-3e15-4dc6-beba-9a34b810807e] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003847765s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-547451 addons disable headlamp --alsologtostderr -v=1: (5.901385087s)
--- PASS: TestAddons/parallel/Headlamp (19.73s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-z8q29" [144d2687-c904-43ac-9849-be59b7ea8712] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004326427s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.53s)

                                                
                                    
TestAddons/parallel/LocalPath (54.38s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-547451 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-547451 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-547451 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cce6d01f-23c6-4336-a129-6baffa07e77f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cce6d01f-23c6-4336-a129-6baffa07e77f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cce6d01f-23c6-4336-a129-6baffa07e77f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00369806s
addons_test.go:906: (dbg) Run:  kubectl --context addons-547451 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 ssh "cat /opt/local-path-provisioner/pvc-fc3cac42-a7ca-409b-b9a6-eca120f21d97_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-547451 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-547451 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-547451 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.557829545s)
--- PASS: TestAddons/parallel/LocalPath (54.38s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9t9dc" [d5e969da-68a6-48ab-950e-73db8cdbdc18] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004779284s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                    
TestAddons/parallel/Yakd (11.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-llccr" [a71829b5-72ec-4911-a8fe-17dec9e39744] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003621925s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-547451 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-547451 addons disable yakd --alsologtostderr -v=1: (5.890645645s)
--- PASS: TestAddons/parallel/Yakd (11.90s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-547451
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-547451: (1m30.950049659s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-547451
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-547451
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-547451
--- PASS: TestAddons/StoppedEnableDisable (91.24s)

                                                
                                    
TestCertOptions (48.73s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-714809 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
I0127 14:02:02.000144 1806070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 14:02:04.079050 1806070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 14:02:04.114182 1806070 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 14:02:04.114232 1806070 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 14:02:04.114330 1806070 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 14:02:04.114360 1806070 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4202392256/002/docker-machine-driver-kvm2
I0127 14:02:04.157177 1806070 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4202392256/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000015ce0 gz:0xc000015ce8 tar:0xc000015c90 tar.bz2:0xc000015ca0 tar.gz:0xc000015cb0 tar.xz:0xc000015cc0 tar.zst:0xc000015cd0 tbz2:0xc000015ca0 tgz:0xc000015cb0 txz:0xc000015cc0 tzst:0xc000015cd0 xz:0xc000015d00 zip:0xc000015d10 zst:0xc000015d08] Getters:map[file:0xc0018dadc0 http:0xc002086a50 https:0xc002086aa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 14:02:04.157217 1806070 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4202392256/002/docker-machine-driver-kvm2
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-714809 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (47.489698891s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-714809 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-714809 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-714809 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-714809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-714809
--- PASS: TestCertOptions (48.73s)
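
TestCertOptions starts the cluster with extra --apiserver-ips/--apiserver-names and a custom --apiserver-port, then reads /var/lib/minikube/certs/apiserver.crt with openssl to confirm the requested names are present. Below is a hedged Go sketch of the same kind of check using crypto/x509, assuming the certificate has been copied to a local apiserver.crt file; the file name and expected SANs are taken from this run for illustration and are not the test's own assertions.

// checksan.go: verify that an API server certificate carries the requested
// IP and DNS subject alternative names. Illustrative sketch only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the cert
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	wantIPs := []string{"127.0.0.1", "192.168.15.15"}
	wantDNS := []string{"localhost", "www.google.com"}

	for _, want := range wantIPs {
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP(want)) {
				found = true
				break
			}
		}
		fmt.Printf("IP SAN %-15s present: %v\n", want, found)
	}
	for _, want := range wantDNS {
		found := false
		for _, name := range cert.DNSNames {
			if name == want {
				found = true
				break
			}
		}
		fmt.Printf("DNS SAN %-15s present: %v\n", want, found)
	}
}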

                                                
                                    
TestCertExpiration (276.73s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-998838 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-998838 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m4.955732973s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-998838 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-998838 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (30.030910249s)
helpers_test.go:175: Cleaning up "cert-expiration-998838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-998838
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-998838: (1.745058926s)
--- PASS: TestCertExpiration (276.73s)

                                                
                                    
TestForceSystemdFlag (69.87s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-805469 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-805469 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m8.595560063s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-805469 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-805469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-805469
E0127 14:03:15.616875 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-805469: (1.062876089s)
--- PASS: TestForceSystemdFlag (69.87s)
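
TestForceSystemdFlag boots the VM with --force-systemd and then dumps /etc/containerd/config.toml over ssh. Below is a rough sketch of scanning such a dump for the systemd cgroup driver setting, assuming the relevant key is the usual containerd runc option SystemdCgroup = true and that the dump has been saved locally; this is an illustration, not the test's exact assertion.

// systemdcheck.go: scan a containerd config.toml dump for SystemdCgroup = true.
// Illustrative sketch under the assumptions stated above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("config.toml") // hypothetical local copy of /etc/containerd/config.toml
	if err != nil {
		panic(err)
	}
	defer f.Close()

	enabled := false
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Normalize whitespace so "SystemdCgroup = true" matches regardless of spacing.
		line := strings.ReplaceAll(scanner.Text(), " ", "")
		if line == "SystemdCgroup=true" {
			enabled = true
			break
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
	fmt.Println("systemd cgroup driver enabled:", enabled)
}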

                                                
                                    
TestForceSystemdEnv (90.16s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-542756 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-542756 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m28.863223818s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-542756 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-542756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-542756
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-542756: (1.081652324s)
--- PASS: TestForceSystemdEnv (90.16s)

                                                
                                    
TestKVMDriverInstallOrUpdate (6.19s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0127 14:01:59.698775 1806070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 14:01:59.698976 1806070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 14:01:59.733884 1806070 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 14:01:59.734222 1806070 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 14:01:59.734276 1806070 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4202392256/001/docker-machine-driver-kvm2
I0127 14:01:59.977101 1806070 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4202392256/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000015ce0 gz:0xc000015ce8 tar:0xc000015c90 tar.bz2:0xc000015ca0 tar.gz:0xc000015cb0 tar.xz:0xc000015cc0 tar.zst:0xc000015cd0 tbz2:0xc000015ca0 tgz:0xc000015cb0 txz:0xc000015cc0 tzst:0xc000015cd0 xz:0xc000015d00 zip:0xc000015d10 zst:0xc000015d08] Getters:map[file:0xc000c04990 http:0xc0009b57c0 https:0xc0009b5810] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 14:01:59.977148 1806070 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4202392256/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (6.19s)
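
The download lines above show the driver installer first requesting the arch-specific release asset (docker-machine-driver-kvm2-amd64), hitting a 404 on its checksum file, and then falling back to the common name. Below is a simplified Go sketch of that fallback pattern using net/http, with the URLs and destination taken from this log; minikube's real download path additionally verifies checksums and reports progress.

// fallbackdl.go: try the arch-specific release asset first, fall back to the
// common name if it is unavailable. Illustrative; not minikube's download.go.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
	dst := "docker-machine-driver-kvm2"
	// Prefer the arch-specific asset; fall back to the common one on failure.
	if err := fetch(base+"docker-machine-driver-kvm2-amd64", dst); err != nil {
		fmt.Println("arch-specific download failed:", err, "- trying the common version")
		if err := fetch(base+"docker-machine-driver-kvm2", dst); err != nil {
			panic(err)
		}
	}
	fmt.Println("driver saved to", dst)
}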

                                                
                                    
TestErrorSpam/setup (42.88s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-280202 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-280202 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-280202 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-280202 --driver=kvm2  --container-runtime=containerd: (42.883827429s)
--- PASS: TestErrorSpam/setup (42.88s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
TestErrorSpam/unpause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

                                                
                                    
TestErrorSpam/stop (4.01s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 stop: (1.417302516s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 stop: (1.111784772s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-280202 --log_dir /tmp/nospam-280202 stop: (1.484734365s)
--- PASS: TestErrorSpam/stop (4.01s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/test/nested/copy/1806070/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (84.41s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410576 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0127 13:10:41.618963 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:41.625336 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:41.636631 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:41.657979 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:41.699467 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:41.780934 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:41.942485 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:42.264286 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:42.906353 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:44.187878 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:46.749903 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:51.871483 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:11:02.113430 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:11:22.595150 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-410576 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m24.406042216s)
--- PASS: TestFunctional/serial/StartWithProxy (84.41s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 13:11:39.891410 1806070 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410576 --alsologtostderr -v=8
E0127 13:12:03.556801 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-410576 --alsologtostderr -v=8: (40.144167682s)
functional_test.go:663: soft start took 40.144819118s for "functional-410576" cluster.
I0127 13:12:20.035924 1806070 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (40.15s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-410576 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-410576 cache add registry.k8s.io/pause:3.1: (1.078848794s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-410576 cache add registry.k8s.io/pause:3.3: (1.169521086s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-410576 cache add registry.k8s.io/pause:latest: (1.032583143s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-410576 /tmp/TestFunctionalserialCacheCmdcacheadd_local612519773/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 cache add minikube-local-cache-test:functional-410576
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-410576 cache add minikube-local-cache-test:functional-410576: (1.642160638s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 cache delete minikube-local-cache-test:functional-410576
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-410576
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (213.559825ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)
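
The cache_reload steps above remove registry.k8s.io/pause:latest from the node with crictl, confirm that crictl inspecti now fails, run minikube cache reload, and confirm the image is back. Below is a compact Go sketch that drives the same round trip through os/exec, assuming the binary path and profile name used in this run; it is illustrative only.

// cachereload.go: reproduce the cache reload round trip against a running profile.
// Illustrative sketch; binary path and profile name are taken from this test run.
package main

import (
	"fmt"
	"os/exec"
)

const (
	minikube = "out/minikube-linux-amd64" // assumed path to the binary under test
	profile  = "functional-410576"
	image    = "registry.k8s.io/pause:latest"
)

func run(args ...string) error {
	cmd := exec.Command(minikube, args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ %s %v\n%s", minikube, args, out)
	return err
}

func main() {
	// Remove the cached image from the node, then confirm inspecti fails.
	_ = run("-p", profile, "ssh", "sudo crictl rmi "+image)
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		fmt.Println("unexpected: image still present after rmi")
	}
	// Reload the cache and confirm the image is back on the node.
	if err := run("-p", profile, "cache", "reload"); err != nil {
		panic(err)
	}
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		panic("image missing after cache reload: " + err.Error())
	}
}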

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 kubectl -- --context functional-410576 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-410576 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.7s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410576 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-410576 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.694822487s)
functional_test.go:761: restart took 41.694936791s for "functional-410576" cluster.
I0127 13:13:09.163146 1806070 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (41.70s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-410576 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
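
ComponentHealth lists the control-plane pods with -l tier=control-plane -o=json and reports each component's phase and Ready condition, as in the lines above. Below is a small Go sketch of pulling those two fields out of kubectl's JSON, assuming the functional-410576 context; the struct decodes only the fields needed and is not the test's own parsing code.

// componenthealth.go: print phase and Ready condition for control-plane pods.
// Illustrative sketch; decodes only the fields it needs from kubectl's JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-410576",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}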

                                                
                                    
TestFunctional/serial/LogsCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-410576 logs: (1.208638773s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 logs --file /tmp/TestFunctionalserialLogsFileCmd3018109736/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-410576 logs --file /tmp/TestFunctionalserialLogsFileCmd3018109736/001/logs.txt: (1.279822377s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                    
TestFunctional/serial/InvalidService (3.9s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-410576 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-410576
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-410576: exit status 115 (288.976288ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.31:32358 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-410576 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.90s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 config get cpus: exit status 14 (76.58185ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 config get cpus: exit status 14 (51.745787ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-410576 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-410576 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1813299: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.12s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410576 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-410576 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (146.927941ms)

                                                
                                                
-- stdout --
	* [functional-410576] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:13:18.018835 1813163 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:13:18.018966 1813163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:18.018977 1813163 out.go:358] Setting ErrFile to fd 2...
	I0127 13:13:18.018983 1813163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:18.019154 1813163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 13:13:18.019742 1813163 out.go:352] Setting JSON to false
	I0127 13:13:18.020757 1813163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":35739,"bootTime":1737947859,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:13:18.020878 1813163 start.go:139] virtualization: kvm guest
	I0127 13:13:18.022585 1813163 out.go:177] * [functional-410576] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:13:18.023879 1813163 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 13:13:18.023920 1813163 notify.go:220] Checking for updates...
	I0127 13:13:18.025917 1813163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:13:18.027076 1813163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 13:13:18.028254 1813163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 13:13:18.029210 1813163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:13:18.030109 1813163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:13:18.031337 1813163 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:13:18.031729 1813163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:13:18.031779 1813163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:13:18.048230 1813163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45129
	I0127 13:13:18.048606 1813163 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:13:18.049170 1813163 main.go:141] libmachine: Using API Version  1
	I0127 13:13:18.049226 1813163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:13:18.049538 1813163 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:13:18.049728 1813163 main.go:141] libmachine: (functional-410576) Calling .DriverName
	I0127 13:13:18.049991 1813163 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:13:18.050298 1813163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:13:18.050335 1813163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:13:18.065956 1813163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0127 13:13:18.066422 1813163 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:13:18.066953 1813163 main.go:141] libmachine: Using API Version  1
	I0127 13:13:18.066982 1813163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:13:18.067329 1813163 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:13:18.067535 1813163 main.go:141] libmachine: (functional-410576) Calling .DriverName
	I0127 13:13:18.106858 1813163 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:13:18.107800 1813163 start.go:297] selected driver: kvm2
	I0127 13:13:18.107828 1813163 start.go:901] validating driver "kvm2" against &{Name:functional-410576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-410576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:13:18.107961 1813163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:13:18.109836 1813163 out.go:201] 
	W0127 13:13:18.110898 1813163 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 13:13:18.111988 1813163 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410576 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.29s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410576 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-410576 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (148.709197ms)

                                                
                                                
-- stdout --
	* [functional-410576] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:13:17.872785 1813113 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:13:17.872962 1813113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:17.872975 1813113 out.go:358] Setting ErrFile to fd 2...
	I0127 13:13:17.872982 1813113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:17.873326 1813113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 13:13:17.873875 1813113 out.go:352] Setting JSON to false
	I0127 13:13:17.875213 1813113 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":35739,"bootTime":1737947859,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:13:17.875359 1813113 start.go:139] virtualization: kvm guest
	I0127 13:13:17.877115 1813113 out.go:177] * [functional-410576] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 13:13:17.878518 1813113 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 13:13:17.878506 1813113 notify.go:220] Checking for updates...
	I0127 13:13:17.879897 1813113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:13:17.881062 1813113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 13:13:17.882285 1813113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 13:13:17.883457 1813113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:13:17.884475 1813113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:13:17.885850 1813113 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:13:17.886272 1813113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:13:17.886323 1813113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:13:17.903258 1813113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34837
	I0127 13:13:17.903754 1813113 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:13:17.904393 1813113 main.go:141] libmachine: Using API Version  1
	I0127 13:13:17.904451 1813113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:13:17.904874 1813113 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:13:17.905097 1813113 main.go:141] libmachine: (functional-410576) Calling .DriverName
	I0127 13:13:17.905425 1813113 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:13:17.905742 1813113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:13:17.905820 1813113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:13:17.922373 1813113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0127 13:13:17.922843 1813113 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:13:17.923318 1813113 main.go:141] libmachine: Using API Version  1
	I0127 13:13:17.923341 1813113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:13:17.923666 1813113 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:13:17.923870 1813113 main.go:141] libmachine: (functional-410576) Calling .DriverName
	I0127 13:13:17.957976 1813113 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 13:13:17.959140 1813113 start.go:297] selected driver: kvm2
	I0127 13:13:17.959164 1813113 start.go:901] validating driver "kvm2" against &{Name:functional-410576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-410576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:13:17.959309 1813113 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:13:17.961525 1813113 out.go:201] 
	W0127 13:13:17.962577 1813113 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 13:13:17.963573 1813113 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-410576 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-410576 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-wl9sb" [7ad2be02-f98a-4769-a910-937fd21290c1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-wl9sb" [7ad2be02-f98a-4769-a910-937fd21290c1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004789045s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.31:31454
functional_test.go:1675: http://192.168.39.31:31454: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-wl9sb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.31:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.31:31454
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.45s)
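For reference, the hello-node-connect workload exercised above is created imperatively (kubectl create deployment followed by kubectl expose, as logged at functional_test.go:1629 and :1635). A roughly equivalent declarative manifest, reconstructed only from the names, image, label, and port visible in this log, would look like the sketch below; the replica count and the apps/v1 selector wiring are assumptions about what kubectl generates, not something recorded in the report.

# Sketch only: declarative equivalent of the imperative commands logged above.
# Grounded in the log: name hello-node-connect, image registry.k8s.io/echoserver:1.8,
# container name "echoserver", pod label app=hello-node-connect, NodePort service on 8080.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node-connect
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node-connect
  template:
    metadata:
      labels:
        app: hello-node-connect
    spec:
      containers:
        - name: echoserver
          image: registry.k8s.io/echoserver:1.8
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node-connect
  namespace: default
spec:
  type: NodePort
  selector:
    app: hello-node-connect
  ports:
    - port: 8080
      targetPort: 8080

The NodePort itself (31454 above) is assigned by the API server at service creation time, which is why the test discovers it with "minikube service hello-node-connect --url" instead of hard-coding it.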

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (48.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d51b8f3f-5884-424c-a6ee-cb23ba63d323] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.190489294s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-410576 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-410576 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-410576 get pvc myclaim -o=json
I0127 13:13:22.026926 1806070 retry.go:31] will retry after 2.849559216s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:db5411b5-809b-44a4-b9e3-b8aca0f71641 ResourceVersion:743 Generation:0 CreationTimestamp:2025-01-27 13:13:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0009f4dc0 VolumeMode:0xc0009f4e00 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-410576 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-410576 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-410576 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [205ab010-9675-4dfb-bbbb-2e58edc56269] Pending
helpers_test.go:344: "sp-pod" [205ab010-9675-4dfb-bbbb-2e58edc56269] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [205ab010-9675-4dfb-bbbb-2e58edc56269] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.00474023s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-410576 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-410576 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-410576 delete -f testdata/storage-provisioner/pod.yaml: (1.713296163s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-410576 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8fce40a6-6a7e-4c7a-94e9-fb0092fa069f] Pending
helpers_test.go:344: "sp-pod" [8fce40a6-6a7e-4c7a-94e9-fb0092fa069f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8fce40a6-6a7e-4c7a-94e9-fb0092fa069f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004766994s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-410576 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.37s)
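The retry message at 13:13:22 above prints the claim's kubectl.kubernetes.io/last-applied-configuration annotation, which records what testdata/storage-provisioner/pvc.yaml requests: a 500Mi ReadWriteOnce filesystem claim named myclaim that the test then waits to see move from Pending to Bound. Reconstructed from that annotation alone, the claim is roughly the following sketch (field order and formatting are guesses; the field values are taken from the log):

# Sketch reconstructed from the last-applied-configuration annotation logged above;
# only the fields shown in that annotation are grounded in this report.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem

With the storage-provisioner addon healthy (verified at the start of this subtest), the k8s.io/minikube-hostpath provisioner named in the claim's annotations binds it dynamically, which is why the Pending phase clears after a couple of short retries and the pod from testdata/storage-provisioner/pod.yaml can mount it.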

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh -n functional-410576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 cp functional-410576:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2841030247/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh -n functional-410576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh -n functional-410576 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.29s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (34.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-410576 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-n76hg" [fd03458c-d4aa-41b9-89d0-3ddfda001115] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-n76hg" [fd03458c-d4aa-41b9-89d0-3ddfda001115] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.00376096s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-410576 exec mysql-58ccfd96bb-n76hg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-410576 exec mysql-58ccfd96bb-n76hg -- mysql -ppassword -e "show databases;": exit status 1 (112.97633ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 13:14:03.107593 1806070 retry.go:31] will retry after 832.228175ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-410576 exec mysql-58ccfd96bb-n76hg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-410576 exec mysql-58ccfd96bb-n76hg -- mysql -ppassword -e "show databases;": exit status 1 (134.187537ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 13:14:04.075165 1806070 retry.go:31] will retry after 2.153100673s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-410576 exec mysql-58ccfd96bb-n76hg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-410576 exec mysql-58ccfd96bb-n76hg -- mysql -ppassword -e "show databases;": exit status 1 (115.179804ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 13:14:06.344786 1806070 retry.go:31] will retry after 1.958393547s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-410576 exec mysql-58ccfd96bb-n76hg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.60s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1806070/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo cat /etc/test/nested/copy/1806070/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1806070.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo cat /etc/ssl/certs/1806070.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1806070.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo cat /usr/share/ca-certificates/1806070.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/18060702.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo cat /etc/ssl/certs/18060702.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/18060702.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo cat /usr/share/ca-certificates/18060702.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-410576 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 ssh "sudo systemctl is-active docker": exit status 1 (196.803829ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 ssh "sudo systemctl is-active crio": exit status 1 (193.096533ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-410576 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-410576 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-blwf4" [3b989a77-76ee-4580-9a1d-000fc9506758] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-blwf4" [3b989a77-76ee-4580-9a1d-000fc9506758] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004566278s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "290.313904ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.324343ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "339.166074ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "63.035409ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdany-port580401993/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737983596905196872" to /tmp/TestFunctionalparallelMountCmdany-port580401993/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737983596905196872" to /tmp/TestFunctionalparallelMountCmdany-port580401993/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737983596905196872" to /tmp/TestFunctionalparallelMountCmdany-port580401993/001/test-1737983596905196872
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (228.600435ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 13:13:17.134224 1806070 retry.go:31] will retry after 537.87364ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 13:13 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 13:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 13:13 test-1737983596905196872
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh cat /mount-9p/test-1737983596905196872
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-410576 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b59106e3-6e94-4146-951c-f57be119431d] Pending
helpers_test.go:344: "busybox-mount" [b59106e3-6e94-4146-951c-f57be119431d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b59106e3-6e94-4146-951c-f57be119431d] Running
helpers_test.go:344: "busybox-mount" [b59106e3-6e94-4146-951c-f57be119431d] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b59106e3-6e94-4146-951c-f57be119431d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005194624s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-410576 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo umount -f /mount-9p"
I0127 13:13:24.938050 1806070 retry.go:31] will retry after 1.692235727s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:db5411b5-809b-44a4-b9e3-b8aca0f71641 ResourceVersion:743 Generation:0 CreationTimestamp:2025-01-27 13:13:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c0f4d0 VolumeMode:0xc001c0f4f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdany-port580401993/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.51s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdspecific-port923409286/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T /mount-9p | grep 9p"
E0127 13:13:25.478309 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.550867ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 13:13:25.617919 1806070 retry.go:31] will retry after 586.021428ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdspecific-port923409286/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 ssh "sudo umount -f /mount-9p": exit status 1 (208.119261ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-410576 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdspecific-port923409286/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 service list -o json
functional_test.go:1494: Took "435.245315ms" to run "out/minikube-linux-amd64 -p functional-410576 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.31:31934
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357077534/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357077534/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357077534/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T" /mount1: exit status 1 (259.438911ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 13:13:27.494010 1806070 retry.go:31] will retry after 428.029037ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-410576 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357077534/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357077534/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410576 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357077534/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.31:31934
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410576 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-410576
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-410576
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410576 image ls --format short --alsologtostderr:
I0127 13:13:44.761720 1815019 out.go:345] Setting OutFile to fd 1 ...
I0127 13:13:44.761860 1815019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:44.761870 1815019 out.go:358] Setting ErrFile to fd 2...
I0127 13:13:44.761876 1815019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:44.762094 1815019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
I0127 13:13:44.762728 1815019 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:44.762864 1815019 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:44.763246 1815019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:44.763314 1815019 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:44.778800 1815019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38201
I0127 13:13:44.779344 1815019 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:44.780060 1815019 main.go:141] libmachine: Using API Version  1
I0127 13:13:44.780090 1815019 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:44.780531 1815019 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:44.780780 1815019 main.go:141] libmachine: (functional-410576) Calling .GetState
I0127 13:13:44.782667 1815019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:44.782713 1815019 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:44.798269 1815019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35893
I0127 13:13:44.798844 1815019 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:44.799417 1815019 main.go:141] libmachine: Using API Version  1
I0127 13:13:44.799440 1815019 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:44.799814 1815019 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:44.799993 1815019 main.go:141] libmachine: (functional-410576) Calling .DriverName
I0127 13:13:44.800197 1815019 ssh_runner.go:195] Run: systemctl --version
I0127 13:13:44.800222 1815019 main.go:141] libmachine: (functional-410576) Calling .GetSSHHostname
I0127 13:13:44.803296 1815019 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:44.803710 1815019 main.go:141] libmachine: (functional-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:3f:d2", ip: ""} in network mk-functional-410576: {Iface:virbr1 ExpiryTime:2025-01-27 14:10:29 +0000 UTC Type:0 Mac:52:54:00:e0:3f:d2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-410576 Clientid:01:52:54:00:e0:3f:d2}
I0127 13:13:44.803732 1815019 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined IP address 192.168.39.31 and MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:44.803976 1815019 main.go:141] libmachine: (functional-410576) Calling .GetSSHPort
I0127 13:13:44.804128 1815019 main.go:141] libmachine: (functional-410576) Calling .GetSSHKeyPath
I0127 13:13:44.804260 1815019 main.go:141] libmachine: (functional-410576) Calling .GetSSHUsername
I0127 13:13:44.804358 1815019 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/functional-410576/id_rsa Username:docker}
I0127 13:13:44.923173 1815019 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:13:45.044406 1815019 main.go:141] libmachine: Making call to close driver server
I0127 13:13:45.044423 1815019 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:45.044658 1815019 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:45.044684 1815019 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:13:45.044693 1815019 main.go:141] libmachine: Making call to close driver server
I0127 13:13:45.044703 1815019 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:45.044707 1815019 main.go:141] libmachine: (functional-410576) DBG | Closing plugin on server side
I0127 13:13:45.044946 1815019 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:45.044955 1815019 main.go:141] libmachine: (functional-410576) DBG | Closing plugin on server side
I0127 13:13:45.044975 1815019 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410576 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:2b0d65 | 20.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:019ee1 | 26.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/kicbase/echo-server               | functional-410576  | sha256:9056ab | 2.37MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e29f9c | 30.9MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:95c0bd | 28.7MB |
| docker.io/library/minikube-local-cache-test | functional-410576  | sha256:aa04c4 | 991B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410576 image ls --format table --alsologtostderr:
I0127 13:13:45.402973 1815137 out.go:345] Setting OutFile to fd 1 ...
I0127 13:13:45.403256 1815137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:45.403269 1815137 out.go:358] Setting ErrFile to fd 2...
I0127 13:13:45.403276 1815137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:45.403567 1815137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
I0127 13:13:45.404394 1815137 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:45.404531 1815137 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:45.405081 1815137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:45.405141 1815137 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:45.421003 1815137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
I0127 13:13:45.421513 1815137 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:45.422138 1815137 main.go:141] libmachine: Using API Version  1
I0127 13:13:45.422160 1815137 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:45.423033 1815137 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:45.424036 1815137 main.go:141] libmachine: (functional-410576) Calling .GetState
I0127 13:13:45.426028 1815137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:45.426077 1815137 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:45.441125 1815137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
I0127 13:13:45.441510 1815137 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:45.441952 1815137 main.go:141] libmachine: Using API Version  1
I0127 13:13:45.441972 1815137 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:45.442373 1815137 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:45.442602 1815137 main.go:141] libmachine: (functional-410576) Calling .DriverName
I0127 13:13:45.442807 1815137 ssh_runner.go:195] Run: systemctl --version
I0127 13:13:45.442843 1815137 main.go:141] libmachine: (functional-410576) Calling .GetSSHHostname
I0127 13:13:45.445982 1815137 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:45.446467 1815137 main.go:141] libmachine: (functional-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:3f:d2", ip: ""} in network mk-functional-410576: {Iface:virbr1 ExpiryTime:2025-01-27 14:10:29 +0000 UTC Type:0 Mac:52:54:00:e0:3f:d2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-410576 Clientid:01:52:54:00:e0:3f:d2}
I0127 13:13:45.446510 1815137 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined IP address 192.168.39.31 and MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:45.446681 1815137 main.go:141] libmachine: (functional-410576) Calling .GetSSHPort
I0127 13:13:45.446882 1815137 main.go:141] libmachine: (functional-410576) Calling .GetSSHKeyPath
I0127 13:13:45.447056 1815137 main.go:141] libmachine: (functional-410576) Calling .GetSSHUsername
I0127 13:13:45.447187 1815137 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/functional-410576/id_rsa Username:docker}
I0127 13:13:45.580213 1815137 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:13:45.644425 1815137 main.go:141] libmachine: Making call to close driver server
I0127 13:13:45.644442 1815137 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:45.644792 1815137 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:45.644813 1815137 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:13:45.644822 1815137 main.go:141] libmachine: Making call to close driver server
I0127 13:13:45.644830 1815137 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:45.645094 1815137 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:45.645111 1815137 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)
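The table above is assembled from "sudo crictl images --output json" run over SSH inside the VM (visible in the ssh_runner lines of the stderr trace). A minimal sketch of reproducing the same listing by hand, assuming the functional-410576 profile is still running:
# formatted view, same command the test runs
out/minikube-linux-amd64 -p functional-410576 image ls --format table
# or query containerd directly inside the VM, which is what the command wraps
out/minikube-linux-amd64 -p functional-410576 ssh -- sudo crictl images --output json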

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410576 image ls --format json --alsologtostderr:
[{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["
registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-410576"],"size":"2372971"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storag
e-provisioner:v5"],"size":"9058936"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"26258470"},{"id":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"30908485"},{"id":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7f
ff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"20657536"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:aa04c45c1bb87ed58b63f4ccace8cf6efa2cb7320ac7fe88ca6e233e422e62a8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-410576"],"size":"991"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry
.k8s.io/kube-apiserver:v1.32.1"],"size":"28671624"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410576 image ls --format json --alsologtostderr:
I0127 13:13:45.102541 1815071 out.go:345] Setting OutFile to fd 1 ...
I0127 13:13:45.102807 1815071 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:45.102819 1815071 out.go:358] Setting ErrFile to fd 2...
I0127 13:13:45.102826 1815071 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:45.103005 1815071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
I0127 13:13:45.103647 1815071 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:45.103794 1815071 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:45.104185 1815071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:45.104250 1815071 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:45.119999 1815071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
I0127 13:13:45.120473 1815071 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:45.121123 1815071 main.go:141] libmachine: Using API Version  1
I0127 13:13:45.121148 1815071 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:45.121468 1815071 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:45.121627 1815071 main.go:141] libmachine: (functional-410576) Calling .GetState
I0127 13:13:45.123442 1815071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:45.123492 1815071 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:45.138372 1815071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
I0127 13:13:45.138761 1815071 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:45.139419 1815071 main.go:141] libmachine: Using API Version  1
I0127 13:13:45.139442 1815071 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:45.139790 1815071 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:45.140015 1815071 main.go:141] libmachine: (functional-410576) Calling .DriverName
I0127 13:13:45.140228 1815071 ssh_runner.go:195] Run: systemctl --version
I0127 13:13:45.140260 1815071 main.go:141] libmachine: (functional-410576) Calling .GetSSHHostname
I0127 13:13:45.143436 1815071 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:45.143949 1815071 main.go:141] libmachine: (functional-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:3f:d2", ip: ""} in network mk-functional-410576: {Iface:virbr1 ExpiryTime:2025-01-27 14:10:29 +0000 UTC Type:0 Mac:52:54:00:e0:3f:d2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-410576 Clientid:01:52:54:00:e0:3f:d2}
I0127 13:13:45.143976 1815071 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined IP address 192.168.39.31 and MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:45.144194 1815071 main.go:141] libmachine: (functional-410576) Calling .GetSSHPort
I0127 13:13:45.144352 1815071 main.go:141] libmachine: (functional-410576) Calling .GetSSHKeyPath
I0127 13:13:45.144590 1815071 main.go:141] libmachine: (functional-410576) Calling .GetSSHUsername
I0127 13:13:45.144730 1815071 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/functional-410576/id_rsa Username:docker}
I0127 13:13:45.225232 1815071 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:13:45.277589 1815071 main.go:141] libmachine: Making call to close driver server
I0127 13:13:45.277608 1815071 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:45.277945 1815071 main.go:141] libmachine: (functional-410576) DBG | Closing plugin on server side
I0127 13:13:45.277999 1815071 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:45.278008 1815071 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:13:45.278022 1815071 main.go:141] libmachine: Making call to close driver server
I0127 13:13:45.278033 1815071 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:45.278326 1815071 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:45.278345 1815071 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
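The JSON output above is an array of objects with id, repoDigests, repoTags, and size fields, so it filters cleanly with jq. A small sketch, assuming jq is installed on the host (jq itself is not part of the test):
# print repo tags and size for each image from the JSON listing
out/minikube-linux-amd64 -p functional-410576 image ls --format json \
  | jq -r '.[] | "\(.repoTags | join(",")) \(.size)"'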

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410576 image ls --format yaml --alsologtostderr:
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-410576
size: "2372971"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "28671624"
- id: sha256:aa04c45c1bb87ed58b63f4ccace8cf6efa2cb7320ac7fe88ca6e233e422e62a8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-410576
size: "991"
- id: sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "26258470"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "30908485"
- id: sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "20657536"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410576 image ls --format yaml --alsologtostderr:
I0127 13:13:44.768189 1815018 out.go:345] Setting OutFile to fd 1 ...
I0127 13:13:44.768333 1815018 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:44.768345 1815018 out.go:358] Setting ErrFile to fd 2...
I0127 13:13:44.768352 1815018 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:44.768528 1815018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
I0127 13:13:44.769131 1815018 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:44.769268 1815018 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:44.769628 1815018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:44.769688 1815018 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:44.785288 1815018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
I0127 13:13:44.785699 1815018 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:44.786362 1815018 main.go:141] libmachine: Using API Version  1
I0127 13:13:44.786400 1815018 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:44.786764 1815018 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:44.786984 1815018 main.go:141] libmachine: (functional-410576) Calling .GetState
I0127 13:13:44.788739 1815018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:44.788813 1815018 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:44.803731 1815018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
I0127 13:13:44.804063 1815018 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:44.804739 1815018 main.go:141] libmachine: Using API Version  1
I0127 13:13:44.804771 1815018 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:44.805309 1815018 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:44.805496 1815018 main.go:141] libmachine: (functional-410576) Calling .DriverName
I0127 13:13:44.805705 1815018 ssh_runner.go:195] Run: systemctl --version
I0127 13:13:44.805737 1815018 main.go:141] libmachine: (functional-410576) Calling .GetSSHHostname
I0127 13:13:44.808688 1815018 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:44.809119 1815018 main.go:141] libmachine: (functional-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:3f:d2", ip: ""} in network mk-functional-410576: {Iface:virbr1 ExpiryTime:2025-01-27 14:10:29 +0000 UTC Type:0 Mac:52:54:00:e0:3f:d2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-410576 Clientid:01:52:54:00:e0:3f:d2}
I0127 13:13:44.809161 1815018 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined IP address 192.168.39.31 and MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:44.809303 1815018 main.go:141] libmachine: (functional-410576) Calling .GetSSHPort
I0127 13:13:44.809471 1815018 main.go:141] libmachine: (functional-410576) Calling .GetSSHKeyPath
I0127 13:13:44.809636 1815018 main.go:141] libmachine: (functional-410576) Calling .GetSSHUsername
I0127 13:13:44.809793 1815018 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/functional-410576/id_rsa Username:docker}
I0127 13:13:44.923411 1815018 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:13:45.035460 1815018 main.go:141] libmachine: Making call to close driver server
I0127 13:13:45.035477 1815018 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:45.035784 1815018 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:45.035812 1815018 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:13:45.035821 1815018 main.go:141] libmachine: Making call to close driver server
I0127 13:13:45.035844 1815018 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:45.035847 1815018 main.go:141] libmachine: (functional-410576) DBG | Closing plugin on server side
I0127 13:13:45.036085 1815018 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:45.036119 1815018 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:13:45.036117 1815018 main.go:141] libmachine: (functional-410576) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410576 ssh pgrep buildkitd: exit status 1 (232.154747ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image build -t localhost/my-image:functional-410576 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-410576 image build -t localhost/my-image:functional-410576 testdata/build --alsologtostderr: (3.612980398s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410576 image build -t localhost/my-image:functional-410576 testdata/build --alsologtostderr:
I0127 13:13:45.321610 1815118 out.go:345] Setting OutFile to fd 1 ...
I0127 13:13:45.321899 1815118 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:45.321911 1815118 out.go:358] Setting ErrFile to fd 2...
I0127 13:13:45.321916 1815118 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:13:45.322083 1815118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
I0127 13:13:45.322657 1815118 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:45.323454 1815118 config.go:182] Loaded profile config "functional-410576": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:13:45.323859 1815118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:45.323905 1815118 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:45.339396 1815118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38033
I0127 13:13:45.339867 1815118 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:45.340409 1815118 main.go:141] libmachine: Using API Version  1
I0127 13:13:45.340425 1815118 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:45.340780 1815118 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:45.340965 1815118 main.go:141] libmachine: (functional-410576) Calling .GetState
I0127 13:13:45.342848 1815118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:13:45.342906 1815118 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:13:45.360531 1815118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44117
I0127 13:13:45.361009 1815118 main.go:141] libmachine: () Calling .GetVersion
I0127 13:13:45.361486 1815118 main.go:141] libmachine: Using API Version  1
I0127 13:13:45.361507 1815118 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:13:45.361931 1815118 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:13:45.362151 1815118 main.go:141] libmachine: (functional-410576) Calling .DriverName
I0127 13:13:45.362363 1815118 ssh_runner.go:195] Run: systemctl --version
I0127 13:13:45.362402 1815118 main.go:141] libmachine: (functional-410576) Calling .GetSSHHostname
I0127 13:13:45.365345 1815118 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:45.365732 1815118 main.go:141] libmachine: (functional-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:3f:d2", ip: ""} in network mk-functional-410576: {Iface:virbr1 ExpiryTime:2025-01-27 14:10:29 +0000 UTC Type:0 Mac:52:54:00:e0:3f:d2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:functional-410576 Clientid:01:52:54:00:e0:3f:d2}
I0127 13:13:45.365771 1815118 main.go:141] libmachine: (functional-410576) DBG | domain functional-410576 has defined IP address 192.168.39.31 and MAC address 52:54:00:e0:3f:d2 in network mk-functional-410576
I0127 13:13:45.365848 1815118 main.go:141] libmachine: (functional-410576) Calling .GetSSHPort
I0127 13:13:45.366017 1815118 main.go:141] libmachine: (functional-410576) Calling .GetSSHKeyPath
I0127 13:13:45.366129 1815118 main.go:141] libmachine: (functional-410576) Calling .GetSSHUsername
I0127 13:13:45.366232 1815118 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/functional-410576/id_rsa Username:docker}
I0127 13:13:45.449371 1815118 build_images.go:161] Building image from path: /tmp/build.1852184681.tar
I0127 13:13:45.449428 1815118 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 13:13:45.467886 1815118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1852184681.tar
I0127 13:13:45.481486 1815118 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1852184681.tar: stat -c "%s %y" /var/lib/minikube/build/build.1852184681.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1852184681.tar': No such file or directory
I0127 13:13:45.481513 1815118 ssh_runner.go:362] scp /tmp/build.1852184681.tar --> /var/lib/minikube/build/build.1852184681.tar (3072 bytes)
I0127 13:13:45.513068 1815118 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1852184681
I0127 13:13:45.521876 1815118 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1852184681 -xf /var/lib/minikube/build/build.1852184681.tar
I0127 13:13:45.530834 1815118 containerd.go:394] Building image: /var/lib/minikube/build/build.1852184681
I0127 13:13:45.530940 1815118 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1852184681 --local dockerfile=/var/lib/minikube/build/build.1852184681 --output type=image,name=localhost/my-image:functional-410576
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:0f90d96a03539497fb07aa305ec9e098924969cd18ffde44a2e4508e3e06eee5
#8 exporting manifest sha256:0f90d96a03539497fb07aa305ec9e098924969cd18ffde44a2e4508e3e06eee5 0.0s done
#8 exporting config sha256:a808746e948dcaff5b1bc9d52d05f53d2af4ab201e33a080e6a8a01957aaca75 0.0s done
#8 naming to localhost/my-image:functional-410576 done
#8 DONE 0.2s
I0127 13:13:48.857412 1815118 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1852184681 --local dockerfile=/var/lib/minikube/build/build.1852184681 --output type=image,name=localhost/my-image:functional-410576: (3.326437843s)
I0127 13:13:48.857505 1815118 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1852184681
I0127 13:13:48.871320 1815118 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1852184681.tar
I0127 13:13:48.880980 1815118 build_images.go:217] Built localhost/my-image:functional-410576 from /tmp/build.1852184681.tar
I0127 13:13:48.881027 1815118 build_images.go:133] succeeded building to: functional-410576
I0127 13:13:48.881032 1815118 build_images.go:134] failed building to: 
I0127 13:13:48.881064 1815118 main.go:141] libmachine: Making call to close driver server
I0127 13:13:48.881082 1815118 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:48.881375 1815118 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:48.881399 1815118 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:13:48.881410 1815118 main.go:141] libmachine: Making call to close driver server
I0127 13:13:48.881419 1815118 main.go:141] libmachine: (functional-410576) Calling .Close
I0127 13:13:48.881651 1815118 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:13:48.881668 1815118 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)
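The buildkit steps above (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) indicate roughly what the testdata/build context contains; the actual checked-in files are not shown in this log. A hypothetical reconstruction of an equivalent build, using the same minikube entry point the test drives:
# build a throwaway context resembling testdata/build (file contents are illustrative only)
mkdir -p /tmp/build-sketch
printf 'hello from the build test\n' > /tmp/build-sketch/content.txt
cat > /tmp/build-sketch/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
# minikube tars the context, copies it into the VM, and runs buildctl there
out/minikube-linux-amd64 -p functional-410576 image build -t localhost/my-image:functional-410576 /tmp/build-sketch --alsologtostderr
out/minikube-linux-amd64 -p functional-410576 image ls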

TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.957006568s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-410576
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image load --daemon kicbase/echo-server:functional-410576 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image load --daemon kicbase/echo-server:functional-410576 --alsologtostderr
2025/01/27 13:13:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (8.315159683s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-410576
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image load --daemon kicbase/echo-server:functional-410576 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-410576 image load --daemon kicbase/echo-server:functional-410576 --alsologtostderr: (1.163166984s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.71s)
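The three ImageLoadDaemon variants above all exercise the same pull / tag / load / verify loop. Consolidated into one pass, using the commands already shown in the log:
# pull and retag on the host, then push the image into the cluster runtime and confirm it is listed
docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-410576
out/minikube-linux-amd64 -p functional-410576 image load --daemon kicbase/echo-server:functional-410576 --alsologtostderr
out/minikube-linux-amd64 -p functional-410576 image ls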

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
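update-context rewrites the kubeconfig entry for the profile so it points at the VM's current IP. One way to check the effect by hand, assuming the default kubeconfig location and the cluster entry named after the profile (the test itself only verifies the exit status):
out/minikube-linux-amd64 -p functional-410576 update-context --alsologtostderr -v=2
kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-410576")].cluster.server}'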

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image save kicbase/echo-server:functional-410576 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image rm kicbase/echo-server:functional-410576 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-410576
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-410576 image save --daemon kicbase/echo-server:functional-410576 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-410576
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
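The last four image tests form a save / remove / reload round trip. The same sequence in one pass, with an arbitrary tar path standing in for the workspace path used by the harness:
# export from the cluster runtime, remove, reload from the tar, then export back to the host docker daemon
out/minikube-linux-amd64 -p functional-410576 image save kicbase/echo-server:functional-410576 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-410576 image rm kicbase/echo-server:functional-410576 --alsologtostderr
out/minikube-linux-amd64 -p functional-410576 image load /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-410576 image save --daemon kicbase/echo-server:functional-410576 --alsologtostderr
docker image inspect kicbase/echo-server:functional-410576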

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-410576
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-410576
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-410576
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (191.53s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-021126 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 13:15:41.623896 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:16:09.320049 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-021126 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m10.878820038s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (191.53s)
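The --ha flag starts a cluster with three control-plane nodes, which is why the start takes roughly three minutes here. A sketch of rerunning the same invocation by hand and inspecting the resulting topology, assuming the same driver and runtime are available:
out/minikube-linux-amd64 start -p ha-021126 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=containerd
out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-021126 node list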

TestMultiControlPlane/serial/DeployApp (5.71s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-021126 -- rollout status deployment/busybox: (3.420648336s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-4z5q4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-8czrk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-mq7p2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-4z5q4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-8czrk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-mq7p2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-4z5q4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-8czrk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-mq7p2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.71s)
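The per-pod DNS checks above repeat the same three lookups for every pod of the busybox deployment from testdata/ha/ha-pod-dns-test.yaml. Collapsed into a loop over the commands the test already uses:
# wait for the rollout, then resolve the three names from every busybox pod
out/minikube-linux-amd64 kubectl -p ha-021126 -- rollout status deployment/busybox
for pod in $(out/minikube-linux-amd64 kubectl -p ha-021126 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  for host in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
    out/minikube-linux-amd64 kubectl -p ha-021126 -- exec "$pod" -- nslookup "$host"
  done
done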

TestMultiControlPlane/serial/PingHostFromPods (1.15s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-4z5q4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-4z5q4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-8czrk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-8czrk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-mq7p2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-021126 -- exec busybox-58667487b6-mq7p2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)

TestMultiControlPlane/serial/AddWorkerNode (59.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-021126 -v=7 --alsologtostderr
E0127 13:18:15.616825 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:15.623229 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:15.634585 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:15.655999 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:15.697495 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:15.779200 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:15.940771 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:16.262121 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:16.903609 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:18.185899 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:20.747757 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:25.869538 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-021126 -v=7 --alsologtostderr: (59.121472216s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.97s)
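Adding the worker is the same two commands the test runs, shown together here; the status call is what confirms the fourth node joined:
out/minikube-linux-amd64 node add -p ha-021126 -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr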

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-021126 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)
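The jsonpath query above dumps every node's full label map in one line. A more readable equivalent for manual inspection, using standard kubectl flags rather than the test's jsonpath form:
kubectl --context ha-021126 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
kubectl --context ha-021126 get nodes --show-labels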

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

TestMultiControlPlane/serial/CopyFile (12.94s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp testdata/cp-test.txt ha-021126:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4140754194/001/cp-test_ha-021126.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126:/home/docker/cp-test.txt ha-021126-m02:/home/docker/cp-test_ha-021126_ha-021126-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m02 "sudo cat /home/docker/cp-test_ha-021126_ha-021126-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126:/home/docker/cp-test.txt ha-021126-m03:/home/docker/cp-test_ha-021126_ha-021126-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m03 "sudo cat /home/docker/cp-test_ha-021126_ha-021126-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126:/home/docker/cp-test.txt ha-021126-m04:/home/docker/cp-test_ha-021126_ha-021126-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m04 "sudo cat /home/docker/cp-test_ha-021126_ha-021126-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp testdata/cp-test.txt ha-021126-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4140754194/001/cp-test_ha-021126-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m02:/home/docker/cp-test.txt ha-021126:/home/docker/cp-test_ha-021126-m02_ha-021126.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126 "sudo cat /home/docker/cp-test_ha-021126-m02_ha-021126.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m02:/home/docker/cp-test.txt ha-021126-m03:/home/docker/cp-test_ha-021126-m02_ha-021126-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m03 "sudo cat /home/docker/cp-test_ha-021126-m02_ha-021126-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m02:/home/docker/cp-test.txt ha-021126-m04:/home/docker/cp-test_ha-021126-m02_ha-021126-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m04 "sudo cat /home/docker/cp-test_ha-021126-m02_ha-021126-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp testdata/cp-test.txt ha-021126-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m03 "sudo cat /home/docker/cp-test.txt"
E0127 13:18:36.110924 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4140754194/001/cp-test_ha-021126-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m03:/home/docker/cp-test.txt ha-021126:/home/docker/cp-test_ha-021126-m03_ha-021126.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126 "sudo cat /home/docker/cp-test_ha-021126-m03_ha-021126.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m03:/home/docker/cp-test.txt ha-021126-m02:/home/docker/cp-test_ha-021126-m03_ha-021126-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m02 "sudo cat /home/docker/cp-test_ha-021126-m03_ha-021126-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m03:/home/docker/cp-test.txt ha-021126-m04:/home/docker/cp-test_ha-021126-m03_ha-021126-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m04 "sudo cat /home/docker/cp-test_ha-021126-m03_ha-021126-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp testdata/cp-test.txt ha-021126-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4140754194/001/cp-test_ha-021126-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m04:/home/docker/cp-test.txt ha-021126:/home/docker/cp-test_ha-021126-m04_ha-021126.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126 "sudo cat /home/docker/cp-test_ha-021126-m04_ha-021126.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m04:/home/docker/cp-test.txt ha-021126-m02:/home/docker/cp-test_ha-021126-m04_ha-021126-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m02 "sudo cat /home/docker/cp-test_ha-021126-m04_ha-021126-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 cp ha-021126-m04:/home/docker/cp-test.txt ha-021126-m03:/home/docker/cp-test_ha-021126-m04_ha-021126-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 ssh -n ha-021126-m03 "sudo cat /home/docker/cp-test_ha-021126-m04_ha-021126-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.94s)

TestMultiControlPlane/serial/StopSecondaryNode (91.61s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 node stop m02 -v=7 --alsologtostderr
E0127 13:18:56.592422 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:19:37.554676 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-021126 node stop m02 -v=7 --alsologtostderr: (1m30.974994623s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr: exit status 7 (632.947877ms)

-- stdout --
	ha-021126
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-021126-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-021126-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-021126-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0127 13:20:12.776965 1820289 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:20:12.777063 1820289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:20:12.777075 1820289 out.go:358] Setting ErrFile to fd 2...
	I0127 13:20:12.777080 1820289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:20:12.777307 1820289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 13:20:12.777539 1820289 out.go:352] Setting JSON to false
	I0127 13:20:12.777572 1820289 mustload.go:65] Loading cluster: ha-021126
	I0127 13:20:12.777602 1820289 notify.go:220] Checking for updates...
	I0127 13:20:12.777977 1820289 config.go:182] Loaded profile config "ha-021126": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:20:12.778000 1820289 status.go:174] checking status of ha-021126 ...
	I0127 13:20:12.778494 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:12.778545 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:12.796631 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0127 13:20:12.797090 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:12.797647 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:12.797674 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:12.798060 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:12.798253 1820289 main.go:141] libmachine: (ha-021126) Calling .GetState
	I0127 13:20:12.799829 1820289 status.go:371] ha-021126 host status = "Running" (err=<nil>)
	I0127 13:20:12.799848 1820289 host.go:66] Checking if "ha-021126" exists ...
	I0127 13:20:12.800146 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:12.800188 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:12.815149 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40985
	I0127 13:20:12.815560 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:12.815981 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:12.816003 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:12.816369 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:12.816580 1820289 main.go:141] libmachine: (ha-021126) Calling .GetIP
	I0127 13:20:12.819472 1820289 main.go:141] libmachine: (ha-021126) DBG | domain ha-021126 has defined MAC address 52:54:00:e6:11:09 in network mk-ha-021126
	I0127 13:20:12.819892 1820289 main.go:141] libmachine: (ha-021126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:11:09", ip: ""} in network mk-ha-021126: {Iface:virbr1 ExpiryTime:2025-01-27 14:14:23 +0000 UTC Type:0 Mac:52:54:00:e6:11:09 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-021126 Clientid:01:52:54:00:e6:11:09}
	I0127 13:20:12.819920 1820289 main.go:141] libmachine: (ha-021126) DBG | domain ha-021126 has defined IP address 192.168.39.102 and MAC address 52:54:00:e6:11:09 in network mk-ha-021126
	I0127 13:20:12.820034 1820289 host.go:66] Checking if "ha-021126" exists ...
	I0127 13:20:12.820330 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:12.820370 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:12.835244 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44605
	I0127 13:20:12.835597 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:12.836088 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:12.836123 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:12.836401 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:12.836591 1820289 main.go:141] libmachine: (ha-021126) Calling .DriverName
	I0127 13:20:12.836746 1820289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:20:12.836768 1820289 main.go:141] libmachine: (ha-021126) Calling .GetSSHHostname
	I0127 13:20:12.839149 1820289 main.go:141] libmachine: (ha-021126) DBG | domain ha-021126 has defined MAC address 52:54:00:e6:11:09 in network mk-ha-021126
	I0127 13:20:12.839603 1820289 main.go:141] libmachine: (ha-021126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:11:09", ip: ""} in network mk-ha-021126: {Iface:virbr1 ExpiryTime:2025-01-27 14:14:23 +0000 UTC Type:0 Mac:52:54:00:e6:11:09 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-021126 Clientid:01:52:54:00:e6:11:09}
	I0127 13:20:12.839640 1820289 main.go:141] libmachine: (ha-021126) DBG | domain ha-021126 has defined IP address 192.168.39.102 and MAC address 52:54:00:e6:11:09 in network mk-ha-021126
	I0127 13:20:12.839740 1820289 main.go:141] libmachine: (ha-021126) Calling .GetSSHPort
	I0127 13:20:12.839969 1820289 main.go:141] libmachine: (ha-021126) Calling .GetSSHKeyPath
	I0127 13:20:12.840113 1820289 main.go:141] libmachine: (ha-021126) Calling .GetSSHUsername
	I0127 13:20:12.840260 1820289 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/ha-021126/id_rsa Username:docker}
	I0127 13:20:12.927253 1820289 ssh_runner.go:195] Run: systemctl --version
	I0127 13:20:12.934073 1820289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:20:12.950200 1820289 kubeconfig.go:125] found "ha-021126" server: "https://192.168.39.254:8443"
	I0127 13:20:12.950251 1820289 api_server.go:166] Checking apiserver status ...
	I0127 13:20:12.950287 1820289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:20:12.963797 1820289 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1127/cgroup
	W0127 13:20:12.972408 1820289 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1127/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:20:12.972461 1820289 ssh_runner.go:195] Run: ls
	I0127 13:20:12.976990 1820289 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 13:20:12.983947 1820289 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 13:20:12.983971 1820289 status.go:463] ha-021126 apiserver status = Running (err=<nil>)
	I0127 13:20:12.983981 1820289 status.go:176] ha-021126 status: &{Name:ha-021126 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:20:12.984001 1820289 status.go:174] checking status of ha-021126-m02 ...
	I0127 13:20:12.984296 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:12.984330 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:12.999603 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I0127 13:20:13.000118 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:13.000632 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:13.000654 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:13.000954 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:13.001132 1820289 main.go:141] libmachine: (ha-021126-m02) Calling .GetState
	I0127 13:20:13.002593 1820289 status.go:371] ha-021126-m02 host status = "Stopped" (err=<nil>)
	I0127 13:20:13.002609 1820289 status.go:384] host is not running, skipping remaining checks
	I0127 13:20:13.002616 1820289 status.go:176] ha-021126-m02 status: &{Name:ha-021126-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:20:13.002637 1820289 status.go:174] checking status of ha-021126-m03 ...
	I0127 13:20:13.002982 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:13.003020 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:13.018283 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0127 13:20:13.018731 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:13.019208 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:13.019232 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:13.019544 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:13.019724 1820289 main.go:141] libmachine: (ha-021126-m03) Calling .GetState
	I0127 13:20:13.021214 1820289 status.go:371] ha-021126-m03 host status = "Running" (err=<nil>)
	I0127 13:20:13.021232 1820289 host.go:66] Checking if "ha-021126-m03" exists ...
	I0127 13:20:13.021523 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:13.021555 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:13.036982 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0127 13:20:13.037352 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:13.037827 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:13.037850 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:13.038114 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:13.038325 1820289 main.go:141] libmachine: (ha-021126-m03) Calling .GetIP
	I0127 13:20:13.040916 1820289 main.go:141] libmachine: (ha-021126-m03) DBG | domain ha-021126-m03 has defined MAC address 52:54:00:b0:4a:2c in network mk-ha-021126
	I0127 13:20:13.041360 1820289 main.go:141] libmachine: (ha-021126-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4a:2c", ip: ""} in network mk-ha-021126: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:21 +0000 UTC Type:0 Mac:52:54:00:b0:4a:2c Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-021126-m03 Clientid:01:52:54:00:b0:4a:2c}
	I0127 13:20:13.041400 1820289 main.go:141] libmachine: (ha-021126-m03) DBG | domain ha-021126-m03 has defined IP address 192.168.39.230 and MAC address 52:54:00:b0:4a:2c in network mk-ha-021126
	I0127 13:20:13.041445 1820289 host.go:66] Checking if "ha-021126-m03" exists ...
	I0127 13:20:13.041728 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:13.041766 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:13.057434 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I0127 13:20:13.057804 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:13.058276 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:13.058300 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:13.058582 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:13.058788 1820289 main.go:141] libmachine: (ha-021126-m03) Calling .DriverName
	I0127 13:20:13.059008 1820289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:20:13.059031 1820289 main.go:141] libmachine: (ha-021126-m03) Calling .GetSSHHostname
	I0127 13:20:13.061993 1820289 main.go:141] libmachine: (ha-021126-m03) DBG | domain ha-021126-m03 has defined MAC address 52:54:00:b0:4a:2c in network mk-ha-021126
	I0127 13:20:13.062541 1820289 main.go:141] libmachine: (ha-021126-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4a:2c", ip: ""} in network mk-ha-021126: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:21 +0000 UTC Type:0 Mac:52:54:00:b0:4a:2c Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-021126-m03 Clientid:01:52:54:00:b0:4a:2c}
	I0127 13:20:13.062572 1820289 main.go:141] libmachine: (ha-021126-m03) DBG | domain ha-021126-m03 has defined IP address 192.168.39.230 and MAC address 52:54:00:b0:4a:2c in network mk-ha-021126
	I0127 13:20:13.062728 1820289 main.go:141] libmachine: (ha-021126-m03) Calling .GetSSHPort
	I0127 13:20:13.062966 1820289 main.go:141] libmachine: (ha-021126-m03) Calling .GetSSHKeyPath
	I0127 13:20:13.063132 1820289 main.go:141] libmachine: (ha-021126-m03) Calling .GetSSHUsername
	I0127 13:20:13.063342 1820289 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/ha-021126-m03/id_rsa Username:docker}
	I0127 13:20:13.147133 1820289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:20:13.166558 1820289 kubeconfig.go:125] found "ha-021126" server: "https://192.168.39.254:8443"
	I0127 13:20:13.166584 1820289 api_server.go:166] Checking apiserver status ...
	I0127 13:20:13.166611 1820289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:20:13.181877 1820289 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1132/cgroup
	W0127 13:20:13.191878 1820289 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1132/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:20:13.191919 1820289 ssh_runner.go:195] Run: ls
	I0127 13:20:13.196325 1820289 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 13:20:13.200846 1820289 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 13:20:13.200863 1820289 status.go:463] ha-021126-m03 apiserver status = Running (err=<nil>)
	I0127 13:20:13.200871 1820289 status.go:176] ha-021126-m03 status: &{Name:ha-021126-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:20:13.200885 1820289 status.go:174] checking status of ha-021126-m04 ...
	I0127 13:20:13.201180 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:13.201219 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:13.216850 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0127 13:20:13.217332 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:13.217826 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:13.217846 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:13.218161 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:13.218340 1820289 main.go:141] libmachine: (ha-021126-m04) Calling .GetState
	I0127 13:20:13.219899 1820289 status.go:371] ha-021126-m04 host status = "Running" (err=<nil>)
	I0127 13:20:13.219915 1820289 host.go:66] Checking if "ha-021126-m04" exists ...
	I0127 13:20:13.220191 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:13.220228 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:13.234471 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36967
	I0127 13:20:13.234922 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:13.235406 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:13.235429 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:13.235722 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:13.235911 1820289 main.go:141] libmachine: (ha-021126-m04) Calling .GetIP
	I0127 13:20:13.238262 1820289 main.go:141] libmachine: (ha-021126-m04) DBG | domain ha-021126-m04 has defined MAC address 52:54:00:9a:22:18 in network mk-ha-021126
	I0127 13:20:13.238650 1820289 main.go:141] libmachine: (ha-021126-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:22:18", ip: ""} in network mk-ha-021126: {Iface:virbr1 ExpiryTime:2025-01-27 14:17:43 +0000 UTC Type:0 Mac:52:54:00:9a:22:18 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-021126-m04 Clientid:01:52:54:00:9a:22:18}
	I0127 13:20:13.238682 1820289 main.go:141] libmachine: (ha-021126-m04) DBG | domain ha-021126-m04 has defined IP address 192.168.39.200 and MAC address 52:54:00:9a:22:18 in network mk-ha-021126
	I0127 13:20:13.238810 1820289 host.go:66] Checking if "ha-021126-m04" exists ...
	I0127 13:20:13.239155 1820289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:20:13.239193 1820289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:13.253379 1820289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0127 13:20:13.253725 1820289 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:13.254127 1820289 main.go:141] libmachine: Using API Version  1
	I0127 13:20:13.254145 1820289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:13.254442 1820289 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:13.254639 1820289 main.go:141] libmachine: (ha-021126-m04) Calling .DriverName
	I0127 13:20:13.254809 1820289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:20:13.254832 1820289 main.go:141] libmachine: (ha-021126-m04) Calling .GetSSHHostname
	I0127 13:20:13.257302 1820289 main.go:141] libmachine: (ha-021126-m04) DBG | domain ha-021126-m04 has defined MAC address 52:54:00:9a:22:18 in network mk-ha-021126
	I0127 13:20:13.257640 1820289 main.go:141] libmachine: (ha-021126-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:22:18", ip: ""} in network mk-ha-021126: {Iface:virbr1 ExpiryTime:2025-01-27 14:17:43 +0000 UTC Type:0 Mac:52:54:00:9a:22:18 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-021126-m04 Clientid:01:52:54:00:9a:22:18}
	I0127 13:20:13.257672 1820289 main.go:141] libmachine: (ha-021126-m04) DBG | domain ha-021126-m04 has defined IP address 192.168.39.200 and MAC address 52:54:00:9a:22:18 in network mk-ha-021126
	I0127 13:20:13.257800 1820289 main.go:141] libmachine: (ha-021126-m04) Calling .GetSSHPort
	I0127 13:20:13.257958 1820289 main.go:141] libmachine: (ha-021126-m04) Calling .GetSSHKeyPath
	I0127 13:20:13.258118 1820289 main.go:141] libmachine: (ha-021126-m04) Calling .GetSSHUsername
	I0127 13:20:13.258275 1820289 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/ha-021126-m04/id_rsa Username:docker}
	I0127 13:20:13.342729 1820289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:20:13.358376 1820289 status.go:176] ha-021126-m04 status: &{Name:ha-021126-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.61s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

TestMultiControlPlane/serial/RestartSecondaryNode (41.91s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 node start m02 -v=7 --alsologtostderr
E0127 13:20:41.619358 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-021126 node start m02 -v=7 --alsologtostderr: (41.025566267s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (450.99s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-021126 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-021126 -v=7 --alsologtostderr
E0127 13:20:59.476499 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:15.616456 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:43.318621 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-021126 -v=7 --alsologtostderr: (4m34.112931854s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-021126 --wait=true -v=7 --alsologtostderr
E0127 13:25:41.619022 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:04.681590 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:15.616178 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-021126 --wait=true -v=7 --alsologtostderr: (2m56.771937186s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-021126
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (450.99s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.71s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-021126 node delete m03 -v=7 --alsologtostderr: (5.981191308s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.71s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

TestMultiControlPlane/serial/StopCluster (272.72s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 stop -v=7 --alsologtostderr
E0127 13:30:41.619617 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-021126 stop -v=7 --alsologtostderr: (4m32.602339484s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr: exit status 7 (114.0173ms)

-- stdout --
	ha-021126
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-021126-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-021126-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 13:33:07.706659 1824242 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:33:07.706806 1824242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:33:07.706819 1824242 out.go:358] Setting ErrFile to fd 2...
	I0127 13:33:07.706824 1824242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:33:07.707005 1824242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 13:33:07.707193 1824242 out.go:352] Setting JSON to false
	I0127 13:33:07.707238 1824242 mustload.go:65] Loading cluster: ha-021126
	I0127 13:33:07.707353 1824242 notify.go:220] Checking for updates...
	I0127 13:33:07.707678 1824242 config.go:182] Loaded profile config "ha-021126": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:33:07.707700 1824242 status.go:174] checking status of ha-021126 ...
	I0127 13:33:07.708168 1824242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:33:07.708217 1824242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:07.728390 1824242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38063
	I0127 13:33:07.728875 1824242 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:07.729502 1824242 main.go:141] libmachine: Using API Version  1
	I0127 13:33:07.729533 1824242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:07.729856 1824242 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:07.730062 1824242 main.go:141] libmachine: (ha-021126) Calling .GetState
	I0127 13:33:07.731792 1824242 status.go:371] ha-021126 host status = "Stopped" (err=<nil>)
	I0127 13:33:07.731805 1824242 status.go:384] host is not running, skipping remaining checks
	I0127 13:33:07.731820 1824242 status.go:176] ha-021126 status: &{Name:ha-021126 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:33:07.731843 1824242 status.go:174] checking status of ha-021126-m02 ...
	I0127 13:33:07.732136 1824242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:33:07.732187 1824242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:07.747306 1824242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0127 13:33:07.747771 1824242 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:07.748259 1824242 main.go:141] libmachine: Using API Version  1
	I0127 13:33:07.748281 1824242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:07.748647 1824242 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:07.748809 1824242 main.go:141] libmachine: (ha-021126-m02) Calling .GetState
	I0127 13:33:07.750384 1824242 status.go:371] ha-021126-m02 host status = "Stopped" (err=<nil>)
	I0127 13:33:07.750402 1824242 status.go:384] host is not running, skipping remaining checks
	I0127 13:33:07.750410 1824242 status.go:176] ha-021126-m02 status: &{Name:ha-021126-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:33:07.750437 1824242 status.go:174] checking status of ha-021126-m04 ...
	I0127 13:33:07.750764 1824242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:33:07.750811 1824242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:07.765553 1824242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I0127 13:33:07.765996 1824242 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:07.766528 1824242 main.go:141] libmachine: Using API Version  1
	I0127 13:33:07.766553 1824242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:07.766897 1824242 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:07.767092 1824242 main.go:141] libmachine: (ha-021126-m04) Calling .GetState
	I0127 13:33:07.768536 1824242 status.go:371] ha-021126-m04 host status = "Stopped" (err=<nil>)
	I0127 13:33:07.768548 1824242 status.go:384] host is not running, skipping remaining checks
	I0127 13:33:07.768554 1824242 status.go:176] ha-021126-m04 status: &{Name:ha-021126-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.72s)

TestMultiControlPlane/serial/RestartCluster (134.55s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-021126 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 13:33:15.616172 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:34:38.680452 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-021126 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m13.796937609s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (134.55s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.60s)

TestMultiControlPlane/serial/AddSecondaryNode (73.5s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-021126 --control-plane -v=7 --alsologtostderr
E0127 13:35:41.618937 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-021126 --control-plane -v=7 --alsologtostderr: (1m12.655635762s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-021126 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.50s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

TestJSONOutput/start/Command (55.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-624063 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-624063 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (55.836328687s)
--- PASS: TestJSONOutput/start/Command (55.84s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-624063 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-624063 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.47s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-624063 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-624063 --output=json --user=testUser: (6.465729398s)
--- PASS: TestJSONOutput/stop/Command (6.47s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-478541 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-478541 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.816579ms)

-- stdout --
	{"specversion":"1.0","id":"266de36d-e34e-4b5a-9018-6e6310cf72e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-478541] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a32e8f8-d5ad-47ba-8536-bce3ec2e2e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20327"}}
	{"specversion":"1.0","id":"8756b5d4-57e8-4e5a-a24f-400e5f2a8025","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"30f24897-0275-48d0-8b16-912e1de981ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig"}}
	{"specversion":"1.0","id":"43e288d0-6bb0-4664-b28a-5e41470bfbab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube"}}
	{"specversion":"1.0","id":"6cb25656-95ea-47b3-a4ea-1c13b21c84dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3d50f30a-ba1f-454a-a93e-9f16e8a48e2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"03408d3f-60ba-4e17-8bfa-a62f5f956b32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-478541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-478541
--- PASS: TestErrorJSONOutput (0.20s)
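For reference, TestErrorJSONOutput exercises minikube's CloudEvents-style JSON stream: each stdout line is one JSON object, and failures surface as an io.k8s.sigs.minikube.error event (DRV_UNSUPPORTED_OS with exit code 56, above). A minimal sketch of consuming that stream outside the test harness, assuming an installed minikube binary and jq:

    # Each stdout line is one CloudEvent; pick out error events and print their name and message.
    minikube start -p json-output-error-478541 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # Prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64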

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (88.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-865465 --driver=kvm2  --container-runtime=containerd
E0127 13:38:15.618937 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-865465 --driver=kvm2  --container-runtime=containerd: (42.222655162s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-877462 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-877462 --driver=kvm2  --container-runtime=containerd: (44.095811705s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-865465
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-877462
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-877462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-877462
helpers_test.go:175: Cleaning up "first-865465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-865465
--- PASS: TestMinikubeProfile (88.96s)
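TestMinikubeProfile is essentially a how-to for juggling multiple profiles; the same flow by hand (a sketch using the profile names from the run above):

    # Create two independent clusters, switch the active profile, and inspect them.
    minikube start -p first-865465 --driver=kvm2 --container-runtime=containerd
    minikube start -p second-877462 --driver=kvm2 --container-runtime=containerd
    minikube profile first-865465          # select "first" as the active profile
    minikube profile list --output json    # machine-readable listing of both profiles
    minikube delete -p second-877462
    minikube delete -p first-865465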

TestMountStart/serial/StartWithMountFirst (25.28s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-056326 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-056326 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (24.28338237s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.28s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-056326 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-056326 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
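StartWithMountFirst plus VerifyMountFirst cover the 9p host mount: the VM is started with --mount and the share is then checked from inside the guest. Roughly the same check by hand (a sketch; the default mount target /minikube-host is assumed):

    # Start a Kubernetes-free VM with a 9p mount, then confirm it from inside the guest.
    minikube start -p mount-start-1-056326 --memory=2048 --mount --mount-port 46464 \
      --no-kubernetes --driver=kvm2 --container-runtime=containerd
    minikube -p mount-start-1-056326 ssh -- ls /minikube-host
    minikube -p mount-start-1-056326 ssh -- "mount | grep 9p"   # should list the 9p filesystem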

TestMountStart/serial/StartWithMountSecond (27.94s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-073769 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-073769 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.938521156s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.94s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073769 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073769 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-056326 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073769 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073769 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-073769
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-073769: (1.277878244s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (23.93s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-073769
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-073769: (22.934096502s)
--- PASS: TestMountStart/serial/RestartStopped (23.93s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073769 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073769 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (110.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-338182 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 13:40:41.618836 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-338182 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m50.571607362s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.97s)

TestMultiNode/serial/DeployApp2Nodes (4.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-338182 -- rollout status deployment/busybox: (3.501774112s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-q644z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-vnqn9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-q644z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-vnqn9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-q644z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-vnqn9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.92s)
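FreshStart2Nodes and DeployApp2Nodes set up the two-node cluster that the rest of this group reuses; the equivalent manual steps (a sketch, using the manifest path from the test tree):

    # Bring up a two-node cluster and roll the busybox DNS-test deployment out across it.
    minikube start -p multinode-338182 --memory=2200 --nodes=2 --wait=true \
      --driver=kvm2 --container-runtime=containerd
    minikube kubectl -p multinode-338182 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-338182 -- rollout status deployment/busybox
    minikube kubectl -p multinode-338182 -- get pods -o jsonpath='{.items[*].status.podIP}'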

TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-q644z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-q644z -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-vnqn9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-338182 -- exec busybox-58667487b6-vnqn9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)
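PingHostFrom2Pods relies on the host.minikube.internal alias that minikube injects into cluster DNS; repeated by hand it looks like this (a sketch; the pod name is taken from the run above and any busybox replica works):

    # Resolve the host alias from inside a pod, then ping the address it returns (192.168.39.1 here).
    kubectl --context multinode-338182 exec busybox-58667487b6-q644z -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context multinode-338182 exec busybox-58667487b6-q644z -- ping -c 1 192.168.39.1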

TestMultiNode/serial/AddNode (60.27s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-338182 -v 3 --alsologtostderr
E0127 13:43:15.616938 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-338182 -v 3 --alsologtostderr: (59.689523784s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (60.27s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-338182 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.57s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

TestMultiNode/serial/CopyFile (7.19s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp testdata/cp-test.txt multinode-338182:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp multinode-338182:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1290901398/001/cp-test_multinode-338182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp multinode-338182:/home/docker/cp-test.txt multinode-338182-m02:/home/docker/cp-test_multinode-338182_multinode-338182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m02 "sudo cat /home/docker/cp-test_multinode-338182_multinode-338182-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp multinode-338182:/home/docker/cp-test.txt multinode-338182-m03:/home/docker/cp-test_multinode-338182_multinode-338182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m03 "sudo cat /home/docker/cp-test_multinode-338182_multinode-338182-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp testdata/cp-test.txt multinode-338182-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp multinode-338182-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1290901398/001/cp-test_multinode-338182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp multinode-338182-m02:/home/docker/cp-test.txt multinode-338182:/home/docker/cp-test_multinode-338182-m02_multinode-338182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182 "sudo cat /home/docker/cp-test_multinode-338182-m02_multinode-338182.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp multinode-338182-m02:/home/docker/cp-test.txt multinode-338182-m03:/home/docker/cp-test_multinode-338182-m02_multinode-338182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m03 "sudo cat /home/docker/cp-test_multinode-338182-m02_multinode-338182-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp testdata/cp-test.txt multinode-338182-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp multinode-338182-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1290901398/001/cp-test_multinode-338182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp multinode-338182-m03:/home/docker/cp-test.txt multinode-338182:/home/docker/cp-test_multinode-338182-m03_multinode-338182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182 "sudo cat /home/docker/cp-test_multinode-338182-m03_multinode-338182.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 cp multinode-338182-m03:/home/docker/cp-test.txt multinode-338182-m02:/home/docker/cp-test_multinode-338182-m03_multinode-338182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 ssh -n multinode-338182-m02 "sudo cat /home/docker/cp-test_multinode-338182-m03_multinode-338182-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.19s)
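The CopyFile subtest walks minikube cp through host-to-node, node-to-host and node-to-node transfers, verifying each with ssh -n; a condensed sketch of the same pattern:

    # Host -> control-plane node, read it back over SSH.
    minikube -p multinode-338182 cp testdata/cp-test.txt multinode-338182:/home/docker/cp-test.txt
    minikube -p multinode-338182 ssh -n multinode-338182 "sudo cat /home/docker/cp-test.txt"
    # Node -> node, naming source and destination nodes explicitly.
    minikube -p multinode-338182 cp multinode-338182:/home/docker/cp-test.txt \
      multinode-338182-m02:/home/docker/cp-test.txt
    minikube -p multinode-338182 ssh -n multinode-338182-m02 "sudo cat /home/docker/cp-test.txt"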

TestMultiNode/serial/StopNode (2.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-338182 node stop m03: (1.340793039s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-338182 status: exit status 7 (405.244507ms)
-- stdout --
	multinode-338182
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-338182-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-338182-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-338182 status --alsologtostderr: exit status 7 (410.083393ms)
-- stdout --
	multinode-338182
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-338182-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-338182-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0127 13:43:41.874224 1831923 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:43:41.874323 1831923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:43:41.874332 1831923 out.go:358] Setting ErrFile to fd 2...
	I0127 13:43:41.874336 1831923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:43:41.874515 1831923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 13:43:41.874667 1831923 out.go:352] Setting JSON to false
	I0127 13:43:41.874700 1831923 mustload.go:65] Loading cluster: multinode-338182
	I0127 13:43:41.874788 1831923 notify.go:220] Checking for updates...
	I0127 13:43:41.875143 1831923 config.go:182] Loaded profile config "multinode-338182": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:43:41.875166 1831923 status.go:174] checking status of multinode-338182 ...
	I0127 13:43:41.875565 1831923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:43:41.875603 1831923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:43:41.897103 1831923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36823
	I0127 13:43:41.897645 1831923 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:43:41.898274 1831923 main.go:141] libmachine: Using API Version  1
	I0127 13:43:41.898301 1831923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:43:41.898623 1831923 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:43:41.898830 1831923 main.go:141] libmachine: (multinode-338182) Calling .GetState
	I0127 13:43:41.900532 1831923 status.go:371] multinode-338182 host status = "Running" (err=<nil>)
	I0127 13:43:41.900550 1831923 host.go:66] Checking if "multinode-338182" exists ...
	I0127 13:43:41.900840 1831923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:43:41.900886 1831923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:43:41.915691 1831923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
	I0127 13:43:41.916022 1831923 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:43:41.916454 1831923 main.go:141] libmachine: Using API Version  1
	I0127 13:43:41.916474 1831923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:43:41.916760 1831923 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:43:41.916941 1831923 main.go:141] libmachine: (multinode-338182) Calling .GetIP
	I0127 13:43:41.919579 1831923 main.go:141] libmachine: (multinode-338182) DBG | domain multinode-338182 has defined MAC address 52:54:00:eb:9a:d0 in network mk-multinode-338182
	I0127 13:43:41.920002 1831923 main.go:141] libmachine: (multinode-338182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:9a:d0", ip: ""} in network mk-multinode-338182: {Iface:virbr1 ExpiryTime:2025-01-27 14:40:49 +0000 UTC Type:0 Mac:52:54:00:eb:9a:d0 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-338182 Clientid:01:52:54:00:eb:9a:d0}
	I0127 13:43:41.920042 1831923 main.go:141] libmachine: (multinode-338182) DBG | domain multinode-338182 has defined IP address 192.168.39.225 and MAC address 52:54:00:eb:9a:d0 in network mk-multinode-338182
	I0127 13:43:41.920131 1831923 host.go:66] Checking if "multinode-338182" exists ...
	I0127 13:43:41.920543 1831923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:43:41.920590 1831923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:43:41.936192 1831923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I0127 13:43:41.936620 1831923 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:43:41.937079 1831923 main.go:141] libmachine: Using API Version  1
	I0127 13:43:41.937103 1831923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:43:41.937387 1831923 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:43:41.937564 1831923 main.go:141] libmachine: (multinode-338182) Calling .DriverName
	I0127 13:43:41.937734 1831923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:43:41.937769 1831923 main.go:141] libmachine: (multinode-338182) Calling .GetSSHHostname
	I0127 13:43:41.940280 1831923 main.go:141] libmachine: (multinode-338182) DBG | domain multinode-338182 has defined MAC address 52:54:00:eb:9a:d0 in network mk-multinode-338182
	I0127 13:43:41.940666 1831923 main.go:141] libmachine: (multinode-338182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:9a:d0", ip: ""} in network mk-multinode-338182: {Iface:virbr1 ExpiryTime:2025-01-27 14:40:49 +0000 UTC Type:0 Mac:52:54:00:eb:9a:d0 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-338182 Clientid:01:52:54:00:eb:9a:d0}
	I0127 13:43:41.940700 1831923 main.go:141] libmachine: (multinode-338182) DBG | domain multinode-338182 has defined IP address 192.168.39.225 and MAC address 52:54:00:eb:9a:d0 in network mk-multinode-338182
	I0127 13:43:41.940765 1831923 main.go:141] libmachine: (multinode-338182) Calling .GetSSHPort
	I0127 13:43:41.940913 1831923 main.go:141] libmachine: (multinode-338182) Calling .GetSSHKeyPath
	I0127 13:43:41.941039 1831923 main.go:141] libmachine: (multinode-338182) Calling .GetSSHUsername
	I0127 13:43:41.941184 1831923 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/multinode-338182/id_rsa Username:docker}
	I0127 13:43:42.017078 1831923 ssh_runner.go:195] Run: systemctl --version
	I0127 13:43:42.022578 1831923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:43:42.036477 1831923 kubeconfig.go:125] found "multinode-338182" server: "https://192.168.39.225:8443"
	I0127 13:43:42.036513 1831923 api_server.go:166] Checking apiserver status ...
	I0127 13:43:42.036543 1831923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:43:42.048087 1831923 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1124/cgroup
	W0127 13:43:42.056623 1831923 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1124/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:43:42.056672 1831923 ssh_runner.go:195] Run: ls
	I0127 13:43:42.060564 1831923 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0127 13:43:42.064849 1831923 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0127 13:43:42.064869 1831923 status.go:463] multinode-338182 apiserver status = Running (err=<nil>)
	I0127 13:43:42.064877 1831923 status.go:176] multinode-338182 status: &{Name:multinode-338182 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:43:42.064894 1831923 status.go:174] checking status of multinode-338182-m02 ...
	I0127 13:43:42.065187 1831923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:43:42.065221 1831923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:43:42.080904 1831923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46743
	I0127 13:43:42.081387 1831923 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:43:42.081876 1831923 main.go:141] libmachine: Using API Version  1
	I0127 13:43:42.081896 1831923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:43:42.082201 1831923 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:43:42.082375 1831923 main.go:141] libmachine: (multinode-338182-m02) Calling .GetState
	I0127 13:43:42.083768 1831923 status.go:371] multinode-338182-m02 host status = "Running" (err=<nil>)
	I0127 13:43:42.083786 1831923 host.go:66] Checking if "multinode-338182-m02" exists ...
	I0127 13:43:42.084078 1831923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:43:42.084119 1831923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:43:42.099346 1831923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0127 13:43:42.099756 1831923 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:43:42.100260 1831923 main.go:141] libmachine: Using API Version  1
	I0127 13:43:42.100284 1831923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:43:42.100596 1831923 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:43:42.100778 1831923 main.go:141] libmachine: (multinode-338182-m02) Calling .GetIP
	I0127 13:43:42.103731 1831923 main.go:141] libmachine: (multinode-338182-m02) DBG | domain multinode-338182-m02 has defined MAC address 52:54:00:5d:1a:0b in network mk-multinode-338182
	I0127 13:43:42.104107 1831923 main.go:141] libmachine: (multinode-338182-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:1a:0b", ip: ""} in network mk-multinode-338182: {Iface:virbr1 ExpiryTime:2025-01-27 14:41:50 +0000 UTC Type:0 Mac:52:54:00:5d:1a:0b Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-338182-m02 Clientid:01:52:54:00:5d:1a:0b}
	I0127 13:43:42.104133 1831923 main.go:141] libmachine: (multinode-338182-m02) DBG | domain multinode-338182-m02 has defined IP address 192.168.39.122 and MAC address 52:54:00:5d:1a:0b in network mk-multinode-338182
	I0127 13:43:42.104278 1831923 host.go:66] Checking if "multinode-338182-m02" exists ...
	I0127 13:43:42.104594 1831923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:43:42.104643 1831923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:43:42.119904 1831923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0127 13:43:42.120358 1831923 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:43:42.120874 1831923 main.go:141] libmachine: Using API Version  1
	I0127 13:43:42.120901 1831923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:43:42.121209 1831923 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:43:42.121390 1831923 main.go:141] libmachine: (multinode-338182-m02) Calling .DriverName
	I0127 13:43:42.121555 1831923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:43:42.121580 1831923 main.go:141] libmachine: (multinode-338182-m02) Calling .GetSSHHostname
	I0127 13:43:42.124310 1831923 main.go:141] libmachine: (multinode-338182-m02) DBG | domain multinode-338182-m02 has defined MAC address 52:54:00:5d:1a:0b in network mk-multinode-338182
	I0127 13:43:42.124736 1831923 main.go:141] libmachine: (multinode-338182-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:1a:0b", ip: ""} in network mk-multinode-338182: {Iface:virbr1 ExpiryTime:2025-01-27 14:41:50 +0000 UTC Type:0 Mac:52:54:00:5d:1a:0b Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-338182-m02 Clientid:01:52:54:00:5d:1a:0b}
	I0127 13:43:42.124767 1831923 main.go:141] libmachine: (multinode-338182-m02) DBG | domain multinode-338182-m02 has defined IP address 192.168.39.122 and MAC address 52:54:00:5d:1a:0b in network mk-multinode-338182
	I0127 13:43:42.124902 1831923 main.go:141] libmachine: (multinode-338182-m02) Calling .GetSSHPort
	I0127 13:43:42.125061 1831923 main.go:141] libmachine: (multinode-338182-m02) Calling .GetSSHKeyPath
	I0127 13:43:42.125188 1831923 main.go:141] libmachine: (multinode-338182-m02) Calling .GetSSHUsername
	I0127 13:43:42.125296 1831923 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/multinode-338182-m02/id_rsa Username:docker}
	I0127 13:43:42.205056 1831923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:43:42.217731 1831923 status.go:176] multinode-338182-m02 status: &{Name:multinode-338182-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:43:42.217784 1831923 status.go:174] checking status of multinode-338182-m03 ...
	I0127 13:43:42.218087 1831923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:43:42.218131 1831923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:43:42.233847 1831923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35577
	I0127 13:43:42.234319 1831923 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:43:42.234844 1831923 main.go:141] libmachine: Using API Version  1
	I0127 13:43:42.234868 1831923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:43:42.235214 1831923 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:43:42.235452 1831923 main.go:141] libmachine: (multinode-338182-m03) Calling .GetState
	I0127 13:43:42.236868 1831923 status.go:371] multinode-338182-m03 host status = "Stopped" (err=<nil>)
	I0127 13:43:42.236881 1831923 status.go:384] host is not running, skipping remaining checks
	I0127 13:43:42.236888 1831923 status.go:176] multinode-338182-m03 status: &{Name:multinode-338182-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)

TestMultiNode/serial/StartAfterStop (34.23s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 node start m03 -v=7 --alsologtostderr
E0127 13:43:44.683804 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-338182 node start m03 -v=7 --alsologtostderr: (33.619076707s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (34.23s)
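StopNode and StartAfterStop together cover the per-node lifecycle; the same sequence outside the harness (a sketch; exit code 7 from status is expected while a node is down):

    # Stop only the third node, observe the degraded status, then bring it back.
    minikube -p multinode-338182 node stop m03
    minikube -p multinode-338182 status || true   # exits 7 while m03 is stopped
    minikube -p multinode-338182 node start m03
    kubectl get nodes                              # all nodes should report Ready again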

TestMultiNode/serial/RestartKeepsNodes (308.47s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-338182
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-338182
E0127 13:45:41.624487 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-338182: (3m2.702120642s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-338182 --wait=true -v=8 --alsologtostderr
E0127 13:48:15.616752 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-338182 --wait=true -v=8 --alsologtostderr: (2m5.660568918s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-338182
--- PASS: TestMultiNode/serial/RestartKeepsNodes (308.47s)

TestMultiNode/serial/DeleteNode (2.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-338182 node delete m03: (1.636468255s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.16s)

TestMultiNode/serial/StopMultiNode (182.03s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 stop
E0127 13:50:41.624153 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:51:18.683956 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-338182 stop: (3m1.859297092s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-338182 status: exit status 7 (89.678786ms)
-- stdout --
	multinode-338182
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-338182-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-338182 status --alsologtostderr: exit status 7 (84.252414ms)
-- stdout --
	multinode-338182
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-338182-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0127 13:52:29.090417 1834684 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:52:29.090525 1834684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:52:29.090534 1834684 out.go:358] Setting ErrFile to fd 2...
	I0127 13:52:29.090538 1834684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:52:29.090690 1834684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 13:52:29.090913 1834684 out.go:352] Setting JSON to false
	I0127 13:52:29.090949 1834684 mustload.go:65] Loading cluster: multinode-338182
	I0127 13:52:29.090994 1834684 notify.go:220] Checking for updates...
	I0127 13:52:29.091334 1834684 config.go:182] Loaded profile config "multinode-338182": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:52:29.091353 1834684 status.go:174] checking status of multinode-338182 ...
	I0127 13:52:29.091760 1834684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:52:29.091808 1834684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:52:29.106722 1834684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44421
	I0127 13:52:29.107162 1834684 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:52:29.107681 1834684 main.go:141] libmachine: Using API Version  1
	I0127 13:52:29.107700 1834684 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:52:29.108145 1834684 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:52:29.108320 1834684 main.go:141] libmachine: (multinode-338182) Calling .GetState
	I0127 13:52:29.109830 1834684 status.go:371] multinode-338182 host status = "Stopped" (err=<nil>)
	I0127 13:52:29.109847 1834684 status.go:384] host is not running, skipping remaining checks
	I0127 13:52:29.109855 1834684 status.go:176] multinode-338182 status: &{Name:multinode-338182 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:52:29.109909 1834684 status.go:174] checking status of multinode-338182-m02 ...
	I0127 13:52:29.110330 1834684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 13:52:29.110377 1834684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:52:29.124892 1834684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0127 13:52:29.125252 1834684 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:52:29.125698 1834684 main.go:141] libmachine: Using API Version  1
	I0127 13:52:29.125719 1834684 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:52:29.126007 1834684 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:52:29.126212 1834684 main.go:141] libmachine: (multinode-338182-m02) Calling .GetState
	I0127 13:52:29.127939 1834684 status.go:371] multinode-338182-m02 host status = "Stopped" (err=<nil>)
	I0127 13:52:29.127956 1834684 status.go:384] host is not running, skipping remaining checks
	I0127 13:52:29.127963 1834684 status.go:176] multinode-338182-m02 status: &{Name:multinode-338182-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.03s)

TestMultiNode/serial/RestartMultiNode (91.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-338182 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 13:53:15.616376 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-338182 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m30.835106236s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-338182 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (91.35s)

TestMultiNode/serial/ValidateNameConflict (42.74s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-338182
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-338182-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-338182-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (70.818131ms)
-- stdout --
	* [multinode-338182-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-338182-m02' is duplicated with machine name 'multinode-338182-m02' in profile 'multinode-338182'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-338182-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-338182-m03 --driver=kvm2  --container-runtime=containerd: (41.601293686s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-338182
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-338182: exit status 80 (206.457607ms)
-- stdout --
	* Adding node m03 to cluster multinode-338182 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-338182-m03 already exists in multinode-338182-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-338182-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.74s)

TestPreload (233.45s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-871345 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0127 13:55:41.619688 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-871345 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m24.678773402s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-871345 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-871345 image pull gcr.io/k8s-minikube/busybox: (2.301032374s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-871345
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-871345: (1m30.953852937s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-871345 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0127 13:58:15.616820 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-871345 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (54.41338495s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-871345 image list
helpers_test.go:175: Cleaning up "test-preload-871345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-871345
--- PASS: TestPreload (233.45s)
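TestPreload builds a cluster with --preload=false on an older Kubernetes version, pulls an extra image, restarts, and then checks that the manually pulled image is still present; in outline (a sketch of the commands the test runs):

    # Create the cluster without the preload tarball and add an image of our own.
    minikube start -p test-preload-871345 --memory=2200 --preload=false \
      --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
    minikube -p test-preload-871345 image pull gcr.io/k8s-minikube/busybox
    # Restart; the image list afterwards should still include the busybox image.
    minikube stop -p test-preload-871345
    minikube start -p test-preload-871345 --memory=2200 --driver=kvm2 --container-runtime=containerd
    minikube -p test-preload-871345 image list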

TestScheduledStopUnix (111.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-601679 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-601679 --memory=2048 --driver=kvm2  --container-runtime=containerd: (39.634942817s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601679 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-601679 -n scheduled-stop-601679
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601679 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 13:59:18.228212 1806070 retry.go:31] will retry after 95.046µs: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.229383 1806070 retry.go:31] will retry after 144.443µs: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.230540 1806070 retry.go:31] will retry after 169.022µs: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.231662 1806070 retry.go:31] will retry after 475.45µs: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.232759 1806070 retry.go:31] will retry after 655.948µs: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.233886 1806070 retry.go:31] will retry after 1.017644ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.235012 1806070 retry.go:31] will retry after 909.543µs: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.236155 1806070 retry.go:31] will retry after 2.123571ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.239367 1806070 retry.go:31] will retry after 2.182811ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.242587 1806070 retry.go:31] will retry after 4.846723ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.247843 1806070 retry.go:31] will retry after 3.175503ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.252045 1806070 retry.go:31] will retry after 4.36011ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.257268 1806070 retry.go:31] will retry after 11.566627ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.269471 1806070 retry.go:31] will retry after 17.589896ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.287708 1806070 retry.go:31] will retry after 24.694316ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
I0127 13:59:18.312957 1806070 retry.go:31] will retry after 50.046822ms: open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/scheduled-stop-601679/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601679 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-601679 -n scheduled-stop-601679
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-601679
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601679 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0127 14:00:24.687002 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-601679
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-601679: exit status 7 (77.035913ms)

                                                
                                                
-- stdout --
	scheduled-stop-601679
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-601679 -n scheduled-stop-601679
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-601679 -n scheduled-stop-601679: exit status 7 (66.880255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-601679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-601679
--- PASS: TestScheduledStopUnix (111.28s)
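
Note: the scheduled-stop flow exercised above can be reproduced by hand. This is a minimal sketch using the profile name and flags from this run (not the test's own code); the status/delete steps are illustrative:

    minikube stop -p scheduled-stop-601679 --schedule 15s        # arm a stop 15 seconds out
    minikube stop -p scheduled-stop-601679 --cancel-scheduled    # cancel a pending scheduled stop
    minikube status -p scheduled-stop-601679 --format={{.Host}}  # prints "Stopped" (exit status 7) once the stop has fired
    minikube delete -p scheduled-stop-601679                     # clean up the profile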

                                                
                                    
x
+
TestRunningBinaryUpgrade (163.28s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3387368520 start -p running-upgrade-868132 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3387368520 start -p running-upgrade-868132 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m8.205687232s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-868132 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-868132 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m31.579029998s)
helpers_test.go:175: Cleaning up "running-upgrade-868132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-868132
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-868132: (1.189706239s)
--- PASS: TestRunningBinaryUpgrade (163.28s)
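
Note: the running-binary upgrade amounts to creating a cluster with an older release and then re-running start with the binary under test against the same, still-running profile. A sketch, with the old binary path taken from this run:

    /tmp/minikube-v1.26.0.3387368520 start -p running-upgrade-868132 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p running-upgrade-868132 --memory=2200 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 delete -p running-upgrade-868132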

                                                
                                    
x
+
TestKubernetesUpgrade (186.75s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m20.011492974s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-420052
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-420052: (2.34629419s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-420052 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-420052 status --format={{.Host}}: exit status 7 (88.602278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (35.979133507s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-420052 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (82.457517ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-420052] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-420052
	    minikube start -p kubernetes-upgrade-420052 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4200522 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-420052 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m6.794908107s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-420052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-420052
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-420052: (1.395957139s)
--- PASS: TestKubernetesUpgrade (186.75s)
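
Note: the upgrade/downgrade sequence above is: create at v1.20.0, stop, restart at v1.32.1, confirm that a downgrade is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED), then restart once more at the new version. As a hand-run sketch using the same commands:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-420052
    out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.32.1 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd   # refused: exit status 106
    out/minikube-linux-amd64 start -p kubernetes-upgrade-420052 --memory=2200 --kubernetes-version=v1.32.1 --driver=kvm2 --container-runtime=containerd   # restart after the failed downgrade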

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987204 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-987204 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (78.767118ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-987204] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
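
Note: the MK_USAGE failure here is the expected result, since --no-kubernetes and --kubernetes-version are mutually exclusive. If a kubernetes-version has been persisted in the global config, the fix suggested in the output is to unset it first; a sketch:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-987204 --no-kubernetes --driver=kvm2 --container-runtime=containerd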

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (112.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987204 --driver=kvm2  --container-runtime=containerd
E0127 14:00:41.621043 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987204 --driver=kvm2  --container-runtime=containerd: (1m52.607271711s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-987204 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (112.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-723599 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-723599 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (113.064065ms)

                                                
                                                
-- stdout --
	* [false-723599] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:01:54.454607 1839943 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:01:54.454913 1839943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:01:54.454923 1839943 out.go:358] Setting ErrFile to fd 2...
	I0127 14:01:54.454935 1839943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:01:54.455172 1839943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
	I0127 14:01:54.455850 1839943 out.go:352] Setting JSON to false
	I0127 14:01:54.456997 1839943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":38655,"bootTime":1737947859,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:01:54.457111 1839943 start.go:139] virtualization: kvm guest
	I0127 14:01:54.459346 1839943 out.go:177] * [false-723599] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:01:54.460566 1839943 notify.go:220] Checking for updates...
	I0127 14:01:54.460575 1839943 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:01:54.461789 1839943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:01:54.462877 1839943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
	I0127 14:01:54.463986 1839943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
	I0127 14:01:54.465072 1839943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:01:54.466279 1839943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:01:54.467912 1839943 config.go:182] Loaded profile config "NoKubernetes-987204": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:01:54.468097 1839943 config.go:182] Loaded profile config "cert-expiration-998838": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:01:54.468263 1839943 config.go:182] Loaded profile config "force-systemd-env-542756": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:01:54.468400 1839943 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:01:54.509181 1839943 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:01:54.510159 1839943 start.go:297] selected driver: kvm2
	I0127 14:01:54.510173 1839943 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:01:54.510185 1839943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:01:54.511832 1839943 out.go:201] 
	W0127 14:01:54.512924 1839943 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0127 14:01:54.514202 1839943 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-723599 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-723599" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:01:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.11:8443
  name: cert-expiration-998838
contexts:
- context:
    cluster: cert-expiration-998838
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:01:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-998838
  name: cert-expiration-998838
current-context: cert-expiration-998838
kind: Config
preferences: {}
users:
- name: cert-expiration-998838
  user:
    client-certificate: /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/cert-expiration-998838/client.crt
    client-key: /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/cert-expiration-998838/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-723599

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723599"

                                                
                                                
----------------------- debugLogs end: false-723599 [took: 3.005703607s] --------------------------------
helpers_test.go:175: Cleaning up "false-723599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-723599
--- PASS: TestNetworkPlugins/group/false (3.28s)
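
Note: the exit status 14 above is the intended outcome, since --cni=false is rejected when the container runtime is containerd (it requires a CNI plugin), so the false-723599 profile is never created, which is why the debug logs report a missing context/profile. For comparison, the CNI configurations that do start elsewhere in this report look like the following (profile name is a placeholder):

    minikube start -p <profile> --cni=kindnet --driver=kvm2 --container-runtime=containerd
    minikube start -p <profile> --cni=calico --driver=kvm2 --container-runtime=containerd
    minikube start -p <profile> --cni=flannel --driver=kvm2 --container-runtime=containerd
    minikube start -p <profile> --enable-default-cni=true --driver=kvm2 --container-runtime=containerd   # bridge CNI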

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (54.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987204 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987204 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (53.224017851s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-987204 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-987204 status -o json: exit status 2 (247.207622ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-987204","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-987204
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-987204: (1.017850293s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (54.49s)
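
Note: restarting an existing profile with --no-kubernetes leaves the VM running but the Kubernetes components stopped, which is why the status check above returns exit status 2. A sketch of that check, using the commands from this run:

    minikube start -p NoKubernetes-987204 --no-kubernetes --driver=kvm2 --container-runtime=containerd
    minikube -p NoKubernetes-987204 status -o json   # Host "Running", Kubelet/APIServer "Stopped"; exit status 2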

                                                
                                    
x
+
TestNoKubernetes/serial/Start (71.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987204 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987204 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m11.618088167s)
--- PASS: TestNoKubernetes/serial/Start (71.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-987204 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-987204 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.206737ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
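
Note: the verification above simply asks systemd inside the guest whether kubelet is active; with Kubernetes disabled, systemctl is-active exits non-zero (status 3 for an inactive unit, surfaced here as ssh status 3 and an overall exit status 1), which is the expected result. A sketch (the echo is illustrative):

    minikube ssh -p NoKubernetes-987204 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero while kubelet is not running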

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (18.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.537841671s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.132532139s)
--- PASS: TestNoKubernetes/serial/ProfileList (18.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-987204
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-987204: (1.357781447s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (33.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987204 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987204 --driver=kvm2  --container-runtime=containerd: (33.295449496s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (33.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (130.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1521487719 start -p stopped-upgrade-436392 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1521487719 start -p stopped-upgrade-436392 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m6.360128405s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1521487719 -p stopped-upgrade-436392 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1521487719 -p stopped-upgrade-436392 stop: (1.317636203s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-436392 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-436392 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m3.179905803s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (130.86s)
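
Note: the stopped-binary upgrade follows the same idea as TestRunningBinaryUpgrade, except the old cluster is stopped before the new binary takes it over. A sketch, with the old binary path from this run; the final logs step mirrors the MinikubeLogs subtest further below:

    /tmp/minikube-v1.26.0.1521487719 start -p stopped-upgrade-436392 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    /tmp/minikube-v1.26.0.1521487719 -p stopped-upgrade-436392 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-436392 --memory=2200 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 logs -p stopped-upgrade-436392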

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-987204 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-987204 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.490877ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
x
+
TestPause/serial/Start (112.33s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-963547 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-963547 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m52.332614231s)
--- PASS: TestPause/serial/Start (112.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (98.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0127 14:05:41.622261 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m38.788150213s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (101.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m41.632517939s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-723599 "pgrep -a kubelet"
I0127 14:07:12.024908 1806070 config.go:182] Loaded profile config "auto-723599": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-723599 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-26zbx" [1af07aa3-6d07-47aa-9980-92ef9d680ceb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-26zbx" [1af07aa3-6d07-47aa-9980-92ef9d680ceb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003760588s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (46.66s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-963547 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-963547 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (46.642213352s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.66s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-436392
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (89.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m29.635936464s)
--- PASS: TestNetworkPlugins/group/calico/Start (89.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-723599 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
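
Note: each network-plugin group runs the same connectivity battery once its cluster is up: deploy a netcat pod, resolve the in-cluster DNS name, then check localhost and hairpin reachability. Roughly, for the auto-723599 profile above (commands as run by the tests):

    kubectl --context auto-723599 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-723599 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin: pod reaching its own service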

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (91.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0127 14:07:58.685957 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m31.474658051s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.47s)

                                                
                                    
x
+
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-963547 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-963547 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-963547 --output=json --layout=cluster: exit status 2 (245.093748ms)

                                                
                                                
-- stdout --
	{"Name":"pause-963547","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-963547","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-963547 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.77s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-963547 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vzhwb" [e2c65555-8c46-4928-b66e-9229312fb4f9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00500872s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-963547 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.85s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.64s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)
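
Note: taken together, the pause subtests above walk the full lifecycle: pause, verify via status (StatusCode 418 "Paused", exit status 2), unpause, pause again, delete, and confirm the profile is gone. A hand-run sketch of that sequence:

    minikube pause -p pause-963547
    minikube status -p pause-963547 --output=json --layout=cluster   # "Paused" (418); exit status 2
    minikube unpause -p pause-963547
    minikube pause -p pause-963547
    minikube delete -p pause-963547
    minikube profile list --output json   # pause-963547 should no longer be listed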

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (94.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m34.456080103s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (94.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-723599 "pgrep -a kubelet"
I0127 14:08:10.370380 1806070 config.go:182] Loaded profile config "kindnet-723599": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)
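The KubeletFlags check is a single ssh round-trip; repeating it by hand is just the command from the log (pgrep -a prints each matching PID together with its full command line, which is where the kubelet flags are read from).

    # Show the running kubelet and its flags on the kindnet-723599 node.
    out/minikube-linux-amd64 ssh -p kindnet-723599 "pgrep -a kubelet"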

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-723599 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vb2t8" [e9b3c0c4-b373-4e47-807c-8a756fef9db6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vb2t8" [e9b3c0c4-b373-4e47-807c-8a756fef9db6] Running
E0127 14:08:15.616902 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004657053s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)
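NetCatPod re-applies the suite's netcat Deployment and waits for the app=netcat pod to become healthy. A by-hand sketch follows, with the manifest path and context taken from the log; using kubectl wait on the Deployment's Available condition is an assumption, as the harness polls the pods instead.

    # Recreate the netcat Deployment, then block until it reports Available.
    kubectl --context kindnet-723599 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-723599 wait deployment/netcat --for=condition=Available --timeout=15m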

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-723599 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)
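DNS, Localhost, and HairPin are three probes run inside the deployed netcat pod; the commands below are copied from the log, with the netcat flags spelled out: -z connects without sending data, -w 5 caps the connect timeout, -i 5 spaces the attempts.

    # Service DNS, loopback reachability, and hairpin (the pod reaching itself through its Service).
    kubectl --context kindnet-723599 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kindnet-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context kindnet-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"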

                                                
                                    
TestNetworkPlugins/group/flannel/Start (80.92s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m20.919497113s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.92s)
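Each network-plugin Start test is an ordinary minikube start with the CNI pinned by flag. A minimal sketch for the flannel case; the profile name and flags come from the log above, and the final kubectl call is just one way to eyeball the DaemonSet pod that ControllerPod later waits on (label app=flannel in the kube-flannel namespace, per the log further down).

    # Bring up the flannel profile, then list the flannel CNI pod.
    out/minikube-linux-amd64 start -p flannel-723599 --memory=3072 --cni=flannel \
        --driver=kvm2 --container-runtime=containerd
    kubectl --context flannel-723599 -n kube-flannel get pods -l app=flannel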

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kl9xs" [1babefba-94cc-45d5-84f6-25a866adac8f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005088396s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-723599 "pgrep -a kubelet"
I0127 14:08:57.014880 1806070 config.go:182] Loaded profile config "calico-723599": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.23s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-723599 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-r95q7" [17574db7-d7b6-4927-87e3-a75c3502e967] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-r95q7" [17574db7-d7b6-4927-87e3-a75c3502e967] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004971465s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-723599 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-723599 "pgrep -a kubelet"
I0127 14:09:11.852692 1806070 config.go:182] Loaded profile config "custom-flannel-723599": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.82s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-723599 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lkqtn" [b5336d42-236a-44d1-a40f-678f814753b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lkqtn" [b5336d42-236a-44d1-a40f-678f814753b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004428076s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.82s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (59.64s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-723599 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (59.639220883s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.64s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-723599 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-723599 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (183.85s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-908018 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
I0127 14:09:40.799577 1806070 config.go:182] Loaded profile config "enable-default-cni-723599": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-908018 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m3.844924041s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (183.85s)
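The old-k8s-version group runs the same start/stop flow against an older release by pinning --kubernetes-version=v1.20.0; the start line is reproduced from the log only because the extra KVM flags are easy to miss in the single long command.

    # First start of the legacy-version profile on the default libvirt network.
    out/minikube-linux-amd64 start -p old-k8s-version-908018 --memory=2200 \
        --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
        --keep-context=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.20.0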

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-723599 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tlmmq" [9dda34f0-419b-40e5-a90f-0643381038c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-tlmmq" [9dda34f0-419b-40e5-a90f-0643381038c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003676323s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-723599 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rtj2l" [b6a1931a-81af-4646-8b18-1dc0ae8e90cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004172367s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-723599 "pgrep -a kubelet"
I0127 14:10:05.067277 1806070 config.go:182] Loaded profile config "flannel-723599": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.23s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-723599 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8zq8v" [2fb1bf10-1899-466a-90d1-d9ef1b4b1299] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8zq8v" [2fb1bf10-1899-466a-90d1-d9ef1b4b1299] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00364688s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (110.03s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-591346 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-591346 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m50.030222542s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (110.03s)
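The no-preload profile is the same first start with --preload=false, so the cached preload tarball is skipped and images are pulled at start time, which lines up with the longer 1m50s wall clock above; the flags are the ones from the log.

    # First start with the image preload disabled.
    out/minikube-linux-amd64 start -p no-preload-591346 --memory=2200 --preload=false \
        --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1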

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-723599 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-723599 "pgrep -a kubelet"
I0127 14:10:23.160405 1806070 config.go:182] Loaded profile config "bridge-723599": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.41s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-723599 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-nmzck" [e7d34e4e-6ab0-4d60-b509-f252e938743f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-nmzck" [e7d34e4e-6ab0-4d60-b509-f252e938743f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005075544s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (66.55s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-635679 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-635679 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m6.553723824s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.55s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-723599 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-723599 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E0127 14:19:40.348327 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:41.004537 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:58.822939 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:08.707914 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:23.554817 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:26.525798 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:28.606360 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:41.619394 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:51.256000 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:22:12.294234 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:22:44.745760 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:23:04.141642 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:23:12.448052 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:23:15.616057 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:23:50.785055 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:12.646433 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:38.688689 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:41.004737 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:58.823034 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:25:23.554579 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:25:41.618961 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:27:12.293896 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:27:44.745070 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:28:04.142062 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:28:15.616402 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:28:35.357024 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:28:50.784278 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:12.645763 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:27.206663 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:41.004871 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:58.822923 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:13.848291 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:23.554229 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:35.709773 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:41.619341 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:31:04.069420 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:31:21.888136 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:31:46.617400 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:32:12.293254 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:32:44.745123 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:04.141692 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:15.616854 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:44.690768 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:50.783980 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:07.809418 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:12.645786 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:41.004438 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:58.823482 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:23.554979 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:41.619675 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:37:12.293302 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:37:44.745097 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:04.141322 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:15.616282 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:50.784100 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:39:12.646317 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:39:41.004739 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:39:58.822907 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (109.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-212529 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-212529 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m49.336000565s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (109.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.3s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-635679 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e7ce5fe7-5558-404b-b34e-80900d600102] Pending
helpers_test.go:344: "busybox" [e7ce5fe7-5558-404b-b34e-80900d600102] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e7ce5fe7-5558-404b-b34e-80900d600102] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003902858s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-635679 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)
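DeployApp creates the suite's busybox pod and then reads the open-file limit inside it. A by-hand sketch using the same manifest path and the integration-test=busybox label the harness waits on; the kubectl wait form is an assumption, as the harness polls instead.

    # Deploy the busybox test pod, wait for it, then check the container's fd limit.
    kubectl --context embed-certs-635679 create -f testdata/busybox.yaml
    kubectl --context embed-certs-635679 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-635679 exec busybox -- /bin/sh -c "ulimit -n"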

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-635679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-635679 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)
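EnableAddonWhileActive turns on metrics-server while the cluster is running, with the image and registry overridden to the stand-in values shown in the flags, then inspects the resulting Deployment; both commands are taken verbatim from the log.

    # Enable metrics-server with overridden image/registry, then inspect the Deployment.
    out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-635679 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-635679 describe deploy/metrics-server -n kube-system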

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (90.81s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-635679 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-635679 --alsologtostderr -v=3: (1m30.80672679s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.28s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-591346 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [45726287-5d6c-4e70-a29f-fca1c3f2269a] Pending
helpers_test.go:344: "busybox" [45726287-5d6c-4e70-a29f-fca1c3f2269a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [45726287-5d6c-4e70-a29f-fca1c3f2269a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004291212s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-591346 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-591346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-591346 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.34s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-591346 --alsologtostderr -v=3
E0127 14:12:12.293313 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:12.299678 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:12.311470 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:12.332807 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:12.374445 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:12.455891 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:12.617762 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:12.939749 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:13.582078 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:14.864033 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:17.425486 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:22.547310 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:32.789086 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-591346 --alsologtostderr -v=3: (1m31.33530325s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-212529 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [157c2c59-3e53-47cb-a7c3-a1861057c169] Pending
helpers_test.go:344: "busybox" [157c2c59-3e53-47cb-a7c3-a1861057c169] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [157c2c59-3e53-47cb-a7c3-a1861057c169] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004617609s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-212529 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-908018 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [52fe4eda-b6a3-4b97-a43f-fc9b67c97735] Pending
helpers_test.go:344: "busybox" [52fe4eda-b6a3-4b97-a43f-fc9b67c97735] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [52fe4eda-b6a3-4b97-a43f-fc9b67c97735] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004442798s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-908018 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-212529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-212529 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-212529 --alsologtostderr -v=3
E0127 14:12:53.270461 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-212529 --alsologtostderr -v=3: (1m31.281440115s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-908018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-908018 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (91.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-908018 --alsologtostderr -v=3
E0127 14:13:04.141598 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:04.148030 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:04.159354 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:04.180682 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:04.222271 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:04.303764 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:04.465367 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:04.787697 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:05.429196 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:06.710811 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:09.272689 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:14.394311 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:15.616553 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-908018 --alsologtostderr -v=3: (1m31.355680842s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-635679 -n embed-certs-635679
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-635679 -n embed-certs-635679: exit status 7 (76.270116ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-635679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-591346 -n no-preload-591346
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-591346 -n no-preload-591346: exit status 7 (72.925695ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-591346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-212529 -n default-k8s-diff-port-212529
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-212529 -n default-k8s-diff-port-212529: exit status 7 (73.451771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-212529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908018 -n old-k8s-version-908018
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908018 -n old-k8s-version-908018: exit status 7 (87.551457ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-908018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (179.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-908018 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0127 14:14:31.761200 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:33.141265 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:41.004931 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:41.011316 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:41.022644 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:41.044022 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:41.085985 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:41.167524 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:41.329088 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:41.650928 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:42.292535 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:43.574718 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:46.136659 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:51.258565 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:53.623384 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:56.154283 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:58.823479 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:58.829869 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:58.841648 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:58.863023 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:58.904372 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:58.985809 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:59.147324 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:14:59.469204 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:00.111089 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:01.393424 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:01.500910 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:03.954932 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:09.076314 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:12.723406 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:19.317996 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:21.982559 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:23.554109 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:23.560466 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:23.571802 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:23.593933 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:23.635339 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:23.717338 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:23.878941 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:24.201113 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:24.842474 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:26.124236 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:28.685616 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:33.806935 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:34.584702 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:39.799987 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:41.618938 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:44.048479 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:48.002570 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:16:02.944486 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:16:04.530795 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:16:20.761796 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:16:34.644902 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:16:45.492501 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:16:56.506503 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:04.689335 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/addons-547451/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:12.293295 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:24.865943 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/enable-default-cni-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-908018 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m59.015474831s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908018 -n old-k8s-version-908018
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (179.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k4h4f" [88db28f8-f7bc-49b5-960d-8ef992e94e60] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005073791s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k4h4f" [88db28f8-f7bc-49b5-960d-8ef992e94e60] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005355167s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-908018 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-908018 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-908018 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908018 -n old-k8s-version-908018
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908018 -n old-k8s-version-908018: exit status 2 (245.757806ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-908018 -n old-k8s-version-908018
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-908018 -n old-k8s-version-908018: exit status 2 (238.460755ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-908018 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908018 -n old-k8s-version-908018
E0127 14:17:39.995616 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-908018 -n old-k8s-version-908018
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (51.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-309688 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 14:17:42.683896 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:44.745602 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:44.752008 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:44.763381 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:44.784724 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:44.826155 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:44.908009 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:45.069575 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:45.391403 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:46.032871 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:47.314995 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:49.876839 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:54.998622 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:18:04.141632 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:18:05.240402 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:18:07.414283 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/bridge-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:18:15.616113 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/functional-410576/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:18:25.722723 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:18:31.844938 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-309688 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (51.011091664s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-309688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-309688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.223627472s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-309688 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-309688 --alsologtostderr -v=3: (7.415709497s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-309688 -n newest-cni-309688
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-309688 -n newest-cni-309688: exit status 7 (79.071663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-309688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-309688 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 14:18:50.783699 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:06.684972 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/old-k8s-version-908018/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:12.646425 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/custom-flannel-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:18.486757 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/calico-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-309688 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (37.991393285s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-309688 -n newest-cni-309688
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-309688 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-309688 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-309688 -n newest-cni-309688
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-309688 -n newest-cni-309688: exit status 2 (243.531898ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-309688 -n newest-cni-309688
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-309688 -n newest-cni-309688: exit status 2 (241.429745ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-309688 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-309688 -n newest-cni-309688
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-309688 -n newest-cni-309688
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.36s)

Test skip (38/316)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.23
265 TestNetworkPlugins/group/cilium 3.33
280 TestStartStop/group/disable-driver-mounts 0.19
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

x
+
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

x
+
TestNetworkPlugins/group/kubenet (3.23s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-723599 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-723599" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:01:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.11:8443
  name: cert-expiration-998838
contexts:
- context:
    cluster: cert-expiration-998838
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:01:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-998838
  name: cert-expiration-998838
current-context: cert-expiration-998838
kind: Config
preferences: {}
users:
- name: cert-expiration-998838
  user:
    client-certificate: /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/cert-expiration-998838/client.crt
    client-key: /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/cert-expiration-998838/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-723599

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723599"

                                                
                                                
----------------------- debugLogs end: kubenet-723599 [took: 3.087422321s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-723599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-723599
--- SKIP: TestNetworkPlugins/group/kubenet (3.23s)

x
+
TestNetworkPlugins/group/cilium (3.33s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-723599 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-723599" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:01:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.11:8443
  name: cert-expiration-998838
contexts:
- context:
    cluster: cert-expiration-998838
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:01:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-998838
  name: cert-expiration-998838
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-998838
  user:
    client-certificate: /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/cert-expiration-998838/client.crt
    client-key: /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/cert-expiration-998838/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-723599

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-723599" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723599"

                                                
                                                
----------------------- debugLogs end: cilium-723599 [took: 3.188683573s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-723599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-723599
--- SKIP: TestNetworkPlugins/group/cilium (3.33s)

x
+
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-793240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-793240
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)