Test Report: KVM_Linux_containerd 20318

dd22c410311484da6763aae43511cabe19037b94:2025-01-27:38092

Failed tests (3/316)

Order  Failed test                                                    Duration (s)
358    TestStartStop/group/no-preload/serial/SecondStart              1596.02
360    TestStartStop/group/embed-certs/serial/SecondStart             1619.09
362    TestStartStop/group/default-k8s-diff-port/serial/SecondStart   1629.91
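
All three failures are SecondStart runs that ran for roughly 1600 s before being killed; in the no-preload case detailed below, the second "minikube start" against the existing profile exited with "signal: killed" after 26m33s while still waiting for cluster components. A manual re-run of that command, copied from the failing invocation in the log (this assumes the same built binary out/minikube-linux-amd64 and a host with the kvm2 driver and libvirt available), would be:

	out/minikube-linux-amd64 start -p no-preload-215237 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1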
TestStartStop/group/no-preload/serial/SecondStart (1596.02s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-215237 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 12:30:47.954311  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-215237 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m33.904231619s)

-- stdout --
	* [no-preload-215237] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-215237" primary control-plane node in "no-preload-215237" cluster
	* Restarting existing kvm2 VM for "no-preload-215237" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-215237 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0127 12:30:40.727312  532344 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:30:40.727428  532344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:30:40.727437  532344 out.go:358] Setting ErrFile to fd 2...
	I0127 12:30:40.727443  532344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:30:40.727651  532344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 12:30:40.728186  532344 out.go:352] Setting JSON to false
	I0127 12:30:40.729253  532344 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11584,"bootTime":1737969457,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:30:40.729347  532344 start.go:139] virtualization: kvm guest
	I0127 12:30:40.731301  532344 out.go:177] * [no-preload-215237] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:30:40.732412  532344 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:30:40.732410  532344 notify.go:220] Checking for updates...
	I0127 12:30:40.733506  532344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:30:40.734483  532344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:30:40.735546  532344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 12:30:40.736524  532344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:30:40.737455  532344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:30:40.738819  532344 config.go:182] Loaded profile config "no-preload-215237": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:30:40.739241  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:30:40.739308  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:30:40.754514  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34301
	I0127 12:30:40.755024  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:30:40.755618  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:30:40.755681  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:30:40.756076  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:30:40.756268  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:30:40.756497  532344 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:30:40.756868  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:30:40.756919  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:30:40.771021  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0127 12:30:40.771473  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:30:40.771933  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:30:40.771952  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:30:40.772224  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:30:40.772442  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:30:40.806602  532344 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:30:40.807876  532344 start.go:297] selected driver: kvm2
	I0127 12:30:40.807894  532344 start.go:901] validating driver "kvm2" against &{Name:no-preload-215237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:30:40.807993  532344 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:30:40.808648  532344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.808721  532344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:30:40.822917  532344 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:30:40.823297  532344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:30:40.823329  532344 cni.go:84] Creating CNI manager for ""
	I0127 12:30:40.823374  532344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:30:40.823421  532344 start.go:340] cluster config:
	{Name:no-preload-215237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-hos
t Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:30:40.823511  532344 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.825138  532344 out.go:177] * Starting "no-preload-215237" primary control-plane node in "no-preload-215237" cluster
	I0127 12:30:40.826418  532344 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:30:40.826528  532344 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/config.json ...
	I0127 12:30:40.826670  532344 cache.go:107] acquiring lock: {Name:mk55e556137b0c44eecbcafd8f1ad8d6d2235baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.826682  532344 cache.go:107] acquiring lock: {Name:mk821e1f96179d7c8829160b4eec213e789ee3c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.826691  532344 cache.go:107] acquiring lock: {Name:mk929031bf1a952c5b2751146f50732f4326ebe7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.826723  532344 start.go:360] acquireMachinesLock for no-preload-215237: {Name:mk818835aef0de701295cc2c98fea95e1be33202 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:30:40.826745  532344 cache.go:107] acquiring lock: {Name:mkf7c3fecb361dc165769bdeefaf93c09aa4c1a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.826753  532344 cache.go:107] acquiring lock: {Name:mka663f6d0ea2d905d4b82f301a92ab6cde3c40e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.826767  532344 start.go:364] duration metric: took 25.335µs to acquireMachinesLock for "no-preload-215237"
	I0127 12:30:40.826711  532344 cache.go:107] acquiring lock: {Name:mk837708656e0fcd1bce12e43d0e6bbb5fd34cfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.826775  532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 12:30:40.826783  532344 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:30:40.826790  532344 fix.go:54] fixHost starting: 
	I0127 12:30:40.826790  532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 12:30:40.826791  532344 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 122.972µs
	I0127 12:30:40.826802  532344 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 12:30:40.826778  532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 12:30:40.826816  532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 12:30:40.826816  532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 12:30:40.826817  532344 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 140.599µs
	I0127 12:30:40.826830  532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 12:30:40.826828  532344 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 127.926µs
	I0127 12:30:40.826838  532344 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 88.859µs
	I0127 12:30:40.826877  532344 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 12:30:40.826832  532344 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 12:30:40.826841  532344 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 12:30:40.826803  532344 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 142.867µs
	I0127 12:30:40.826897  532344 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 12:30:40.826830  532344 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 86.77µs
	I0127 12:30:40.826905  532344 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 12:30:40.826783  532344 cache.go:107] acquiring lock: {Name:mke910280a5e5f0cfff4ec3463b563cf11210087 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.826830  532344 cache.go:107] acquiring lock: {Name:mkd2a6bebb2f88e8eab599e070725a391f31a539 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:30:40.826938  532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 12:30:40.826950  532344 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 202.041µs
	I0127 12:30:40.826959  532344 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 12:30:40.826971  532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 12:30:40.826980  532344 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 201.579µs
	I0127 12:30:40.826992  532344 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 12:30:40.827004  532344 cache.go:87] Successfully saved all images to host disk.
	I0127 12:30:40.827136  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:30:40.827181  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:30:40.840594  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I0127 12:30:40.841066  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:30:40.841617  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:30:40.841637  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:30:40.841959  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:30:40.842165  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:30:40.842301  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
	I0127 12:30:40.843824  532344 fix.go:112] recreateIfNeeded on no-preload-215237: state=Stopped err=<nil>
	I0127 12:30:40.843852  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	W0127 12:30:40.843991  532344 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:30:40.845738  532344 out.go:177] * Restarting existing kvm2 VM for "no-preload-215237" ...
	I0127 12:30:40.846939  532344 main.go:141] libmachine: (no-preload-215237) Calling .Start
	I0127 12:30:40.848043  532344 main.go:141] libmachine: (no-preload-215237) starting domain...
	I0127 12:30:40.848079  532344 main.go:141] libmachine: (no-preload-215237) ensuring networks are active...
	I0127 12:30:40.848690  532344 main.go:141] libmachine: (no-preload-215237) Ensuring network default is active
	I0127 12:30:40.849048  532344 main.go:141] libmachine: (no-preload-215237) Ensuring network mk-no-preload-215237 is active
	I0127 12:30:40.849478  532344 main.go:141] libmachine: (no-preload-215237) getting domain XML...
	I0127 12:30:40.850299  532344 main.go:141] libmachine: (no-preload-215237) creating domain...
	I0127 12:30:42.033031  532344 main.go:141] libmachine: (no-preload-215237) waiting for IP...
	I0127 12:30:42.033824  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:42.034251  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:42.034346  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:42.034240  532380 retry.go:31] will retry after 216.227621ms: waiting for domain to come up
	I0127 12:30:42.251883  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:42.252518  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:42.252551  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:42.252472  532380 retry.go:31] will retry after 259.03318ms: waiting for domain to come up
	I0127 12:30:42.513108  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:42.513658  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:42.513690  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:42.513561  532380 retry.go:31] will retry after 328.428662ms: waiting for domain to come up
	I0127 12:30:42.844239  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:42.844721  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:42.844756  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:42.844680  532380 retry.go:31] will retry after 527.092813ms: waiting for domain to come up
	I0127 12:30:43.373357  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:43.373864  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:43.373886  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:43.373823  532380 retry.go:31] will retry after 704.763548ms: waiting for domain to come up
	I0127 12:30:44.079794  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:44.080321  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:44.080357  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:44.080285  532380 retry.go:31] will retry after 929.711084ms: waiting for domain to come up
	I0127 12:30:45.011401  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:45.011920  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:45.011953  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:45.011876  532380 retry.go:31] will retry after 1.164341882s: waiting for domain to come up
	I0127 12:30:46.177513  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:46.178005  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:46.178033  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:46.177963  532380 retry.go:31] will retry after 1.423725356s: waiting for domain to come up
	I0127 12:30:47.602746  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:47.603179  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:47.603205  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:47.603155  532380 retry.go:31] will retry after 1.393685643s: waiting for domain to come up
	I0127 12:30:48.998707  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:48.999209  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:48.999248  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:48.999158  532380 retry.go:31] will retry after 1.514373112s: waiting for domain to come up
	I0127 12:30:50.516002  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:50.516491  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:50.516528  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:50.516429  532380 retry.go:31] will retry after 2.407396715s: waiting for domain to come up
	I0127 12:30:52.926548  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:52.927029  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:52.927060  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:52.926981  532380 retry.go:31] will retry after 2.617026411s: waiting for domain to come up
	I0127 12:30:55.546865  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:55.547487  532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
	I0127 12:30:55.547512  532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:55.547433  532380 retry.go:31] will retry after 3.886989093s: waiting for domain to come up
	I0127 12:30:59.438919  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.439387  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has current primary IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.439416  532344 main.go:141] libmachine: (no-preload-215237) found domain IP: 192.168.72.127
	I0127 12:30:59.439429  532344 main.go:141] libmachine: (no-preload-215237) reserving static IP address...
	I0127 12:30:59.439874  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "no-preload-215237", mac: "52:54:00:f8:56:01", ip: "192.168.72.127"} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:30:59.439903  532344 main.go:141] libmachine: (no-preload-215237) DBG | skip adding static IP to network mk-no-preload-215237 - found existing host DHCP lease matching {name: "no-preload-215237", mac: "52:54:00:f8:56:01", ip: "192.168.72.127"}
	I0127 12:30:59.439918  532344 main.go:141] libmachine: (no-preload-215237) reserved static IP address 192.168.72.127 for domain no-preload-215237
	I0127 12:30:59.439933  532344 main.go:141] libmachine: (no-preload-215237) waiting for SSH...
	I0127 12:30:59.439945  532344 main.go:141] libmachine: (no-preload-215237) DBG | Getting to WaitForSSH function...
	I0127 12:30:59.441927  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.442276  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:30:59.442301  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.442422  532344 main.go:141] libmachine: (no-preload-215237) DBG | Using SSH client type: external
	I0127 12:30:59.442438  532344 main.go:141] libmachine: (no-preload-215237) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa (-rw-------)
	I0127 12:30:59.442510  532344 main.go:141] libmachine: (no-preload-215237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:30:59.442536  532344 main.go:141] libmachine: (no-preload-215237) DBG | About to run SSH command:
	I0127 12:30:59.442551  532344 main.go:141] libmachine: (no-preload-215237) DBG | exit 0
	I0127 12:30:59.567981  532344 main.go:141] libmachine: (no-preload-215237) DBG | SSH cmd err, output: <nil>: 
	I0127 12:30:59.568339  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetConfigRaw
	I0127 12:30:59.568989  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetIP
	I0127 12:30:59.571592  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.571959  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:30:59.571989  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.572273  532344 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/config.json ...
	I0127 12:30:59.572469  532344 machine.go:93] provisionDockerMachine start ...
	I0127 12:30:59.572497  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:30:59.572706  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:30:59.574838  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.575239  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:30:59.575263  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.575397  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:30:59.575571  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:30:59.575727  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:30:59.575896  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:30:59.576055  532344 main.go:141] libmachine: Using SSH client type: native
	I0127 12:30:59.576315  532344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0127 12:30:59.576332  532344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:30:59.684121  532344 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:30:59.684143  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetMachineName
	I0127 12:30:59.684363  532344 buildroot.go:166] provisioning hostname "no-preload-215237"
	I0127 12:30:59.684395  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetMachineName
	I0127 12:30:59.684563  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:30:59.687017  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.687498  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:30:59.687519  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.687688  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:30:59.687882  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:30:59.688033  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:30:59.688149  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:30:59.688400  532344 main.go:141] libmachine: Using SSH client type: native
	I0127 12:30:59.688606  532344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0127 12:30:59.688620  532344 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-215237 && echo "no-preload-215237" | sudo tee /etc/hostname
	I0127 12:30:59.809126  532344 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-215237
	
	I0127 12:30:59.809160  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:30:59.811730  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.812034  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:30:59.812065  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.812279  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:30:59.812479  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:30:59.812666  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:30:59.812823  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:30:59.812975  532344 main.go:141] libmachine: Using SSH client type: native
	I0127 12:30:59.813154  532344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0127 12:30:59.813177  532344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-215237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-215237/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-215237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:30:59.928174  532344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:30:59.928216  532344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-471120/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-471120/.minikube}
	I0127 12:30:59.928250  532344 buildroot.go:174] setting up certificates
	I0127 12:30:59.928266  532344 provision.go:84] configureAuth start
	I0127 12:30:59.928289  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetMachineName
	I0127 12:30:59.928558  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetIP
	I0127 12:30:59.931047  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.931432  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:30:59.931458  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.931628  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:30:59.933683  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.934054  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:30:59.934084  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:30:59.934225  532344 provision.go:143] copyHostCerts
	I0127 12:30:59.934287  532344 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem, removing ...
	I0127 12:30:59.934312  532344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem
	I0127 12:30:59.934391  532344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem (1679 bytes)
	I0127 12:30:59.934498  532344 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem, removing ...
	I0127 12:30:59.934509  532344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem
	I0127 12:30:59.934546  532344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem (1082 bytes)
	I0127 12:30:59.934622  532344 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem, removing ...
	I0127 12:30:59.934632  532344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem
	I0127 12:30:59.934665  532344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem (1123 bytes)
	I0127 12:30:59.934735  532344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem org=jenkins.no-preload-215237 san=[127.0.0.1 192.168.72.127 localhost minikube no-preload-215237]
	I0127 12:31:00.052134  532344 provision.go:177] copyRemoteCerts
	I0127 12:31:00.052197  532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:31:00.052224  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:31:00.054597  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.054994  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:31:00.055028  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.055188  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:31:00.055385  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:31:00.055557  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:31:00.055685  532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
	I0127 12:31:00.138123  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:31:00.159235  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 12:31:00.179466  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:31:00.201071  532344 provision.go:87] duration metric: took 272.788555ms to configureAuth
	I0127 12:31:00.201093  532344 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:31:00.201247  532344 config.go:182] Loaded profile config "no-preload-215237": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:31:00.201256  532344 machine.go:96] duration metric: took 628.773488ms to provisionDockerMachine
	I0127 12:31:00.201264  532344 start.go:293] postStartSetup for "no-preload-215237" (driver="kvm2")
	I0127 12:31:00.201274  532344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:31:00.201301  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:31:00.201610  532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:31:00.201640  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:31:00.204042  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.204384  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:31:00.204411  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.204567  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:31:00.204782  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:31:00.204951  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:31:00.205111  532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
	I0127 12:31:00.290264  532344 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:31:00.294177  532344 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:31:00.294205  532344 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/addons for local assets ...
	I0127 12:31:00.294280  532344 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/files for local assets ...
	I0127 12:31:00.294371  532344 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem -> 4783872.pem in /etc/ssl/certs
	I0127 12:31:00.294486  532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:31:00.303136  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:31:00.323875  532344 start.go:296] duration metric: took 122.599026ms for postStartSetup
	I0127 12:31:00.323915  532344 fix.go:56] duration metric: took 19.497125621s for fixHost
	I0127 12:31:00.323936  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:31:00.326361  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.326682  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:31:00.326707  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.326913  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:31:00.327092  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:31:00.327242  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:31:00.327360  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:31:00.327496  532344 main.go:141] libmachine: Using SSH client type: native
	I0127 12:31:00.327673  532344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0127 12:31:00.327684  532344 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:31:00.436970  532344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981060.412662770
	
	I0127 12:31:00.436997  532344 fix.go:216] guest clock: 1737981060.412662770
	I0127 12:31:00.437004  532344 fix.go:229] Guest: 2025-01-27 12:31:00.41266277 +0000 UTC Remote: 2025-01-27 12:31:00.323919122 +0000 UTC m=+19.633267258 (delta=88.743648ms)
	I0127 12:31:00.437024  532344 fix.go:200] guest clock delta is within tolerance: 88.743648ms
	I0127 12:31:00.437028  532344 start.go:83] releasing machines lock for "no-preload-215237", held for 19.610253908s
	I0127 12:31:00.437048  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:31:00.437336  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetIP
	I0127 12:31:00.440013  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.440380  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:31:00.440416  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.440580  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:31:00.441102  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:31:00.441284  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:31:00.441380  532344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:31:00.441431  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:31:00.441489  532344 ssh_runner.go:195] Run: cat /version.json
	I0127 12:31:00.441522  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:31:00.443822  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.443874  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.444218  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:31:00.444251  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:31:00.444272  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.444340  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:00.444466  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:31:00.444612  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:31:00.444687  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:31:00.444783  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:31:00.444839  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:31:00.444925  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:31:00.444987  532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
	I0127 12:31:00.445082  532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
	I0127 12:31:00.534498  532344 ssh_runner.go:195] Run: systemctl --version
	I0127 12:31:00.564683  532344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:31:00.569691  532344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:31:00.569752  532344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:31:00.583888  532344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:31:00.583909  532344 start.go:495] detecting cgroup driver to use...
	I0127 12:31:00.583974  532344 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:31:00.613953  532344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:31:00.625976  532344 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:31:00.626021  532344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:31:00.638192  532344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:31:00.650175  532344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:31:00.764972  532344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:31:00.889875  532344 docker.go:233] disabling docker service ...
	I0127 12:31:00.889955  532344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:31:00.903369  532344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:31:00.914933  532344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:31:01.045889  532344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:31:01.175748  532344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:31:01.187756  532344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:31:01.205407  532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:31:01.214753  532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:31:01.223968  532344 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:31:01.224018  532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:31:01.233281  532344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:31:01.242430  532344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:31:01.251772  532344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:31:01.260995  532344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:31:01.270440  532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:31:01.279739  532344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:31:01.288816  532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:31:01.298104  532344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:31:01.306211  532344 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:31:01.306255  532344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:31:01.318407  532344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:31:01.326978  532344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:31:01.446085  532344 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:31:01.472453  532344 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 12:31:01.472530  532344 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:31:01.477101  532344 retry.go:31] will retry after 1.31059768s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 12:31:02.788604  532344 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:31:02.793855  532344 start.go:563] Will wait 60s for crictl version
	I0127 12:31:02.793909  532344 ssh_runner.go:195] Run: which crictl
	I0127 12:31:02.797452  532344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:31:02.841844  532344 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 12:31:02.841918  532344 ssh_runner.go:195] Run: containerd --version
	I0127 12:31:02.868423  532344 ssh_runner.go:195] Run: containerd --version
	I0127 12:31:02.892306  532344 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 12:31:02.893458  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetIP
	I0127 12:31:02.896603  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:02.897044  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:31:02.897077  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:31:02.897311  532344 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 12:31:02.901184  532344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:31:02.913317  532344 kubeadm.go:883] updating cluster {Name:no-preload-215237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:31:02.913471  532344 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:31:02.913539  532344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:31:02.943808  532344 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:31:02.943828  532344 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:31:02.943837  532344 kubeadm.go:934] updating node { 192.168.72.127 8443 v1.32.1 containerd true true} ...
	I0127 12:31:02.943928  532344 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-215237 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:31:02.943982  532344 ssh_runner.go:195] Run: sudo crictl info
	I0127 12:31:02.974803  532344 cni.go:84] Creating CNI manager for ""
	I0127 12:31:02.974824  532344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:31:02.974834  532344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:31:02.974857  532344 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.127 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-215237 NodeName:no-preload-215237 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:31:02.974956  532344 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-215237"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.127"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.127"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:31:02.975009  532344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:31:02.984012  532344 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:31:02.984070  532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:31:02.992339  532344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0127 12:31:03.007404  532344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:31:03.022118  532344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
	I0127 12:31:03.036811  532344 ssh_runner.go:195] Run: grep 192.168.72.127	control-plane.minikube.internal$ /etc/hosts
	I0127 12:31:03.040003  532344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.127	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:31:03.051232  532344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:31:03.172247  532344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:31:03.192551  532344 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237 for IP: 192.168.72.127
	I0127 12:31:03.192572  532344 certs.go:194] generating shared ca certs ...
	I0127 12:31:03.192588  532344 certs.go:226] acquiring lock for ca certs: {Name:mk02d117412837bd489768267e2b174e6c3ff6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:31:03.192793  532344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key
	I0127 12:31:03.192854  532344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key
	I0127 12:31:03.192868  532344 certs.go:256] generating profile certs ...
	I0127 12:31:03.192984  532344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/client.key
	I0127 12:31:03.193064  532344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/apiserver.key.8184fc12
	I0127 12:31:03.193114  532344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/proxy-client.key
	I0127 12:31:03.193270  532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem (1338 bytes)
	W0127 12:31:03.193309  532344 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387_empty.pem, impossibly tiny 0 bytes
	I0127 12:31:03.193323  532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:31:03.193356  532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:31:03.193385  532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:31:03.193417  532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem (1679 bytes)
	I0127 12:31:03.193467  532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:31:03.194073  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:31:03.227604  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:31:03.254585  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:31:03.283266  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:31:03.319723  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 12:31:03.363597  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:31:03.396059  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:31:03.418199  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:31:03.442707  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem --> /usr/share/ca-certificates/478387.pem (1338 bytes)
	I0127 12:31:03.464702  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /usr/share/ca-certificates/4783872.pem (1708 bytes)
	I0127 12:31:03.486822  532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:31:03.508475  532344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:31:03.523647  532344 ssh_runner.go:195] Run: openssl version
	I0127 12:31:03.528893  532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:31:03.538561  532344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:31:03.542628  532344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:31:03.542669  532344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:31:03.547997  532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:31:03.557978  532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478387.pem && ln -fs /usr/share/ca-certificates/478387.pem /etc/ssl/certs/478387.pem"
	I0127 12:31:03.573483  532344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478387.pem
	I0127 12:31:03.579430  532344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:28 /usr/share/ca-certificates/478387.pem
	I0127 12:31:03.579469  532344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478387.pem
	I0127 12:31:03.588347  532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478387.pem /etc/ssl/certs/51391683.0"
	I0127 12:31:03.600641  532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783872.pem && ln -fs /usr/share/ca-certificates/4783872.pem /etc/ssl/certs/4783872.pem"
	I0127 12:31:03.611232  532344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783872.pem
	I0127 12:31:03.615436  532344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:28 /usr/share/ca-certificates/4783872.pem
	I0127 12:31:03.615490  532344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783872.pem
	I0127 12:31:03.621133  532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783872.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:31:03.631880  532344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:31:03.636131  532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:31:03.641537  532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:31:03.646536  532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:31:03.651569  532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:31:03.656580  532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:31:03.661815  532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:31:03.666949  532344 kubeadm.go:392] StartCluster: {Name:no-preload-215237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:03.667067  532344 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 12:31:03.667112  532344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:31:03.708632  532344 cri.go:89] found id: "505fbc8803f52aaa2369926620791e9fb0143e36efc8bec95e264ca00ba0f4a3"
	I0127 12:31:03.708661  532344 cri.go:89] found id: "67c0fdaeaf8f805761aef519e5ce8d7a18b84954fe7f4904252cc174d46814b9"
	I0127 12:31:03.708673  532344 cri.go:89] found id: "869180061925aafeaed8b69400b166337c3f8922002e5df9120dd5175199cf18"
	I0127 12:31:03.708679  532344 cri.go:89] found id: "f91fd7915e8528354c9e64a05d2b5a648e7580257c383e77d00fec62ed750130"
	I0127 12:31:03.708683  532344 cri.go:89] found id: "3364d982e4cb9b77ea1ca4ee8d2c5f727fe3acca7ecc7819307ceb2267df4383"
	I0127 12:31:03.708688  532344 cri.go:89] found id: "f15b855ef6d97b29741de5c05f40d90165b778820a07496bedddcbfee47d05b0"
	I0127 12:31:03.708692  532344 cri.go:89] found id: "ace9d66a69af8df3d795c60a20b47c0f64075c460dcdc20966f2f52e635484d7"
	I0127 12:31:03.708696  532344 cri.go:89] found id: ""
	I0127 12:31:03.708768  532344 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 12:31:03.723216  532344 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T12:31:03Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 12:31:03.723286  532344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:31:03.732749  532344 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:31:03.732773  532344 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:31:03.732834  532344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:31:03.742030  532344 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:31:03.742751  532344 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-215237" does not appear in /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:31:03.743297  532344 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-471120/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-215237" cluster setting kubeconfig missing "no-preload-215237" context setting]
	I0127 12:31:03.743962  532344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:31:03.745759  532344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:31:03.754320  532344 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.127
	I0127 12:31:03.754348  532344 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:31:03.754360  532344 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 12:31:03.754410  532344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:31:03.796303  532344 cri.go:89] found id: "505fbc8803f52aaa2369926620791e9fb0143e36efc8bec95e264ca00ba0f4a3"
	I0127 12:31:03.796334  532344 cri.go:89] found id: "67c0fdaeaf8f805761aef519e5ce8d7a18b84954fe7f4904252cc174d46814b9"
	I0127 12:31:03.796340  532344 cri.go:89] found id: "869180061925aafeaed8b69400b166337c3f8922002e5df9120dd5175199cf18"
	I0127 12:31:03.796345  532344 cri.go:89] found id: "f91fd7915e8528354c9e64a05d2b5a648e7580257c383e77d00fec62ed750130"
	I0127 12:31:03.796349  532344 cri.go:89] found id: "3364d982e4cb9b77ea1ca4ee8d2c5f727fe3acca7ecc7819307ceb2267df4383"
	I0127 12:31:03.796357  532344 cri.go:89] found id: "f15b855ef6d97b29741de5c05f40d90165b778820a07496bedddcbfee47d05b0"
	I0127 12:31:03.796361  532344 cri.go:89] found id: "ace9d66a69af8df3d795c60a20b47c0f64075c460dcdc20966f2f52e635484d7"
	I0127 12:31:03.796365  532344 cri.go:89] found id: ""
	I0127 12:31:03.796373  532344 cri.go:252] Stopping containers: [505fbc8803f52aaa2369926620791e9fb0143e36efc8bec95e264ca00ba0f4a3 67c0fdaeaf8f805761aef519e5ce8d7a18b84954fe7f4904252cc174d46814b9 869180061925aafeaed8b69400b166337c3f8922002e5df9120dd5175199cf18 f91fd7915e8528354c9e64a05d2b5a648e7580257c383e77d00fec62ed750130 3364d982e4cb9b77ea1ca4ee8d2c5f727fe3acca7ecc7819307ceb2267df4383 f15b855ef6d97b29741de5c05f40d90165b778820a07496bedddcbfee47d05b0 ace9d66a69af8df3d795c60a20b47c0f64075c460dcdc20966f2f52e635484d7]
	I0127 12:31:03.796432  532344 ssh_runner.go:195] Run: which crictl
	I0127 12:31:03.800254  532344 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 505fbc8803f52aaa2369926620791e9fb0143e36efc8bec95e264ca00ba0f4a3 67c0fdaeaf8f805761aef519e5ce8d7a18b84954fe7f4904252cc174d46814b9 869180061925aafeaed8b69400b166337c3f8922002e5df9120dd5175199cf18 f91fd7915e8528354c9e64a05d2b5a648e7580257c383e77d00fec62ed750130 3364d982e4cb9b77ea1ca4ee8d2c5f727fe3acca7ecc7819307ceb2267df4383 f15b855ef6d97b29741de5c05f40d90165b778820a07496bedddcbfee47d05b0 ace9d66a69af8df3d795c60a20b47c0f64075c460dcdc20966f2f52e635484d7
	I0127 12:31:03.832801  532344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:31:03.848490  532344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:31:03.858673  532344 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:31:03.858693  532344 kubeadm.go:157] found existing configuration files:
	
	I0127 12:31:03.858738  532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:31:03.867322  532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:31:03.867371  532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:31:03.875833  532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:31:03.884170  532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:31:03.884209  532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:31:03.892639  532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:31:03.900809  532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:31:03.900859  532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:31:03.909231  532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:31:03.917997  532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:31:03.918046  532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:31:03.927395  532344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:31:03.937153  532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:04.054712  532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:04.780572  532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:04.989545  532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:05.068231  532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:05.167638  532344 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:31:05.167744  532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:05.667821  532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:06.168324  532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:06.196097  532344 api_server.go:72] duration metric: took 1.028459805s to wait for apiserver process to appear ...
	I0127 12:31:06.196132  532344 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:31:06.196166  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:31:06.196920  532344 api_server.go:269] stopped: https://192.168.72.127:8443/healthz: Get "https://192.168.72.127:8443/healthz": dial tcp 192.168.72.127:8443: connect: connection refused
	I0127 12:31:06.696590  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:31:08.684891  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:31:08.684939  532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:31:08.684960  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:31:08.723267  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:31:08.723300  532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:31:08.723318  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:31:08.733845  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:31:08.733876  532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:31:09.196471  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:31:09.201015  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:31:09.201038  532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:31:09.696253  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:31:09.701316  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:31:09.701345  532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:31:10.197092  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:31:10.205140  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 200:
	ok
	I0127 12:31:10.213238  532344 api_server.go:141] control plane version: v1.32.1
	I0127 12:31:10.213264  532344 api_server.go:131] duration metric: took 4.017123672s to wait for apiserver health ...
	I0127 12:31:10.213274  532344 cni.go:84] Creating CNI manager for ""
	I0127 12:31:10.213280  532344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:31:10.214831  532344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:31:10.216111  532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:31:10.228338  532344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:31:10.257329  532344 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:31:10.281510  532344 system_pods.go:59] 8 kube-system pods found
	I0127 12:31:10.281564  532344 system_pods.go:61] "coredns-668d6bf9bc-zh42j" [dcebb6c7-6360-408e-b1bf-0fa75706d01b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:31:10.281579  532344 system_pods.go:61] "etcd-no-preload-215237" [351bdcb1-e57f-452f-ac15-c919dbd85236] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:31:10.281597  532344 system_pods.go:61] "kube-apiserver-no-preload-215237" [31345d0f-59eb-4d21-b652-aa42121f6172] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:31:10.281610  532344 system_pods.go:61] "kube-controller-manager-no-preload-215237" [afe7df6f-3e38-43b9-92b0-fa0cc894da1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:31:10.281620  532344 system_pods.go:61] "kube-proxy-4bwrn" [959b8095-1cf8-4883-97fc-8cee826fe012] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:31:10.281631  532344 system_pods.go:61] "kube-scheduler-no-preload-215237" [43bb4154-1617-43a3-b721-9a7eae31bc1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:31:10.281645  532344 system_pods.go:61] "metrics-server-f79f97bbb-57422" [a3b4a3bd-65a5-4f98-9143-30f6bae7c691] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:31:10.281657  532344 system_pods.go:61] "storage-provisioner" [95a9ba7c-5fe2-4436-95a5-3d7cec947a22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:31:10.281665  532344 system_pods.go:74] duration metric: took 24.311549ms to wait for pod list to return data ...
	I0127 12:31:10.281680  532344 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:31:10.284847  532344 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:31:10.284874  532344 node_conditions.go:123] node cpu capacity is 2
	I0127 12:31:10.284889  532344 node_conditions.go:105] duration metric: took 3.200244ms to run NodePressure ...
	I0127 12:31:10.284912  532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:10.548277  532344 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 12:31:10.553575  532344 kubeadm.go:739] kubelet initialised
	I0127 12:31:10.553596  532344 kubeadm.go:740] duration metric: took 5.291701ms waiting for restarted kubelet to initialise ...
	I0127 12:31:10.553606  532344 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:31:10.561135  532344 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:12.578507  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:15.068380  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:17.568005  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:20.073351  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:22.568605  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:23.068873  532344 pod_ready.go:93] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:23.068898  532344 pod_ready.go:82] duration metric: took 12.507743159s for pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:23.068907  532344 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:23.073880  532344 pod_ready.go:93] pod "etcd-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:23.073904  532344 pod_ready.go:82] duration metric: took 4.987182ms for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:23.073916  532344 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:23.078751  532344 pod_ready.go:93] pod "kube-apiserver-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:23.078772  532344 pod_ready.go:82] duration metric: took 4.848497ms for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:23.078782  532344 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:23.083332  532344 pod_ready.go:93] pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:23.083355  532344 pod_ready.go:82] duration metric: took 4.564246ms for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:23.083366  532344 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4bwrn" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:23.087407  532344 pod_ready.go:93] pod "kube-proxy-4bwrn" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:23.087425  532344 pod_ready.go:82] duration metric: took 4.051963ms for pod "kube-proxy-4bwrn" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:23.087435  532344 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:25.093397  532344 pod_ready.go:103] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:26.094833  532344 pod_ready.go:93] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:26.094861  532344 pod_ready.go:82] duration metric: took 3.007417278s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:26.094875  532344 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:28.101352  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:30.601585  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:32.604139  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:35.102293  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:37.600754  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:40.100905  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:42.100991  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:44.101855  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:46.101913  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:48.102821  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:50.103463  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:52.602228  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:55.101407  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:57.603161  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:00.101889  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:02.600319  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:04.602188  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:06.602936  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:09.101176  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:11.102753  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:13.602362  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:15.821577  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:18.103079  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:20.601901  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:23.100745  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:25.101367  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:27.601511  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:30.101270  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:32.101710  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:34.101744  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:36.102085  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:38.601443  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:41.101544  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:43.101863  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:45.601264  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:48.100828  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:50.100933  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:52.101287  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:54.101793  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:56.101838  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:58.601337  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:01.101215  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:03.601070  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:06.100670  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:08.100799  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:10.100841  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:12.600560  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:14.601258  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:16.601363  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:19.100864  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:21.101528  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:23.101689  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:25.602231  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:28.102076  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:30.601547  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:33.100663  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:35.101055  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:37.601056  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:39.601616  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:41.601758  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:44.102003  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:46.601023  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:49.100200  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:51.100924  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:53.601516  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:55.601588  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:57.602867  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:00.101588  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:02.601244  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:04.602040  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:07.100767  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:09.104987  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:11.601623  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:14.100809  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:16.601152  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:19.100846  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:21.101788  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:23.102642  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:25.601649  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:28.100668  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:30.100960  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:32.101019  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:34.604268  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:37.101197  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:39.101280  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:41.102630  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:43.600441  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:45.601204  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:47.602586  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:50.101098  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:52.101401  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:54.601925  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:57.101945  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:59.102108  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:01.601264  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:04.101446  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:06.600799  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:09.100973  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:11.102079  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:13.103016  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:15.602006  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:18.102362  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:20.601666  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:22.601820  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:25.106352  532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:26.095020  532344 pod_ready.go:82] duration metric: took 4m0.000127968s for pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:26.095050  532344 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 12:35:26.095079  532344 pod_ready.go:39] duration metric: took 4m15.54146268s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
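
The pod_ready helper above polls the pod's Ready condition every few seconds until its 4m0s budget runs out; the metrics-server pod never turns Ready, which is what pushes minikube into the cluster reset below. (The addon-enable output later in this log shows metrics-server being deployed from fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable image reference, which would be consistent with a permanently not-Ready pod.) For anyone triaging this by hand, a hedged set of diagnostic commands (illustrative only, not executed by the test; the context name assumes the profile's kubeconfig entry) would be:

	# Show phase, restarts and node placement for the stuck pod
	kubectl --context no-preload-215237 -n kube-system get pod metrics-server-f79f97bbb-57422 -o wide
	# The Events section usually explains a stuck Ready condition
	# (image pull failures, failing readiness probes, ...)
	kubectl --context no-preload-215237 -n kube-system describe pod metrics-server-f79f97bbb-57422
	# Print just the Ready condition, i.e. the value pod_ready.go keeps logging above
	kubectl --context no-preload-215237 -n kube-system get pod metrics-server-f79f97bbb-57422 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
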
	I0127 12:35:26.095114  532344 kubeadm.go:597] duration metric: took 4m22.362333931s to restartPrimaryControlPlane
	W0127 12:35:26.095189  532344 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:35:26.095218  532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 12:35:27.761272  532344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.666028034s)
	I0127 12:35:27.761357  532344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:35:27.776204  532344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:35:27.786547  532344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:35:27.796338  532344 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:35:27.796364  532344 kubeadm.go:157] found existing configuration files:
	
	I0127 12:35:27.796421  532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:35:27.806214  532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:35:27.806277  532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:35:27.817923  532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:35:27.828012  532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:35:27.828079  532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:35:27.837315  532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:35:27.848052  532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:35:27.848106  532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:35:27.860234  532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:35:27.872361  532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:35:27.872422  532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:35:27.885106  532344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:35:27.934225  532344 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:35:27.934331  532344 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:35:28.041622  532344 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:35:28.041807  532344 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:35:28.041967  532344 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:35:28.048826  532344 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:35:28.051333  532344 out.go:235]   - Generating certificates and keys ...
	I0127 12:35:28.051432  532344 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:35:28.051514  532344 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:35:28.051625  532344 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:35:28.051703  532344 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:35:28.051797  532344 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:35:28.051868  532344 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:35:28.051950  532344 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:35:28.052033  532344 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:35:28.052143  532344 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:35:28.052246  532344 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:35:28.052297  532344 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:35:28.052371  532344 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:35:28.501590  532344 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:35:28.683534  532344 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:35:28.769933  532344 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:35:28.921369  532344 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:35:28.988234  532344 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:35:28.988795  532344 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:35:28.992437  532344 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:35:28.993990  532344 out.go:235]   - Booting up control plane ...
	I0127 12:35:28.994125  532344 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:35:28.994275  532344 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:35:28.994434  532344 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:35:29.013469  532344 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:35:29.020349  532344 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:35:29.020452  532344 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:35:29.162116  532344 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:35:29.162239  532344 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:35:30.161829  532344 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001098337s
	I0127 12:35:30.161949  532344 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:35:34.663734  532344 kubeadm.go:310] [api-check] The API server is healthy after 4.502057638s
	I0127 12:35:34.684263  532344 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:35:34.700836  532344 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:35:34.730827  532344 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:35:34.731121  532344 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-215237 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:35:34.741724  532344 kubeadm.go:310] [bootstrap-token] Using token: tfwuw1.vs4tk3z0lrym6pr2
	I0127 12:35:34.742999  532344 out.go:235]   - Configuring RBAC rules ...
	I0127 12:35:34.743147  532344 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:35:34.749364  532344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:35:34.759443  532344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:35:34.764392  532344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:35:34.768628  532344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:35:34.772602  532344 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:35:35.071966  532344 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:35:35.500583  532344 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:35:36.073445  532344 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:35:36.075332  532344 kubeadm.go:310] 
	I0127 12:35:36.075428  532344 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:35:36.075445  532344 kubeadm.go:310] 
	I0127 12:35:36.075540  532344 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:35:36.075550  532344 kubeadm.go:310] 
	I0127 12:35:36.075586  532344 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:35:36.075671  532344 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:35:36.075755  532344 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:35:36.075769  532344 kubeadm.go:310] 
	I0127 12:35:36.075846  532344 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:35:36.075860  532344 kubeadm.go:310] 
	I0127 12:35:36.075922  532344 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:35:36.075935  532344 kubeadm.go:310] 
	I0127 12:35:36.076003  532344 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:35:36.076102  532344 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:35:36.076224  532344 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:35:36.076304  532344 kubeadm.go:310] 
	I0127 12:35:36.076429  532344 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:35:36.076586  532344 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:35:36.076613  532344 kubeadm.go:310] 
	I0127 12:35:36.076710  532344 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tfwuw1.vs4tk3z0lrym6pr2 \
	I0127 12:35:36.076899  532344 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
	I0127 12:35:36.076933  532344 kubeadm.go:310] 	--control-plane 
	I0127 12:35:36.076940  532344 kubeadm.go:310] 
	I0127 12:35:36.077034  532344 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:35:36.077045  532344 kubeadm.go:310] 
	I0127 12:35:36.077154  532344 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tfwuw1.vs4tk3z0lrym6pr2 \
	I0127 12:35:36.077287  532344 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 
	I0127 12:35:36.078154  532344 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:35:36.078355  532344 cni.go:84] Creating CNI manager for ""
	I0127 12:35:36.078379  532344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:35:36.080448  532344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:35:36.081599  532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:35:36.097221  532344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
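
The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration referenced by the "Configuring bridge CNI" step. Its exact contents are not shown in the log; as a rough sketch, a bridge + portmap conflist of this general shape (field values here are illustrative assumptions, not the bytes minikube wrote) can be installed like so:

	# Illustrative only; the real file written by minikube may differ in its details.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
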
	I0127 12:35:36.116819  532344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:35:36.116867  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:36.116885  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-215237 minikube.k8s.io/updated_at=2025_01_27T12_35_36_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=no-preload-215237 minikube.k8s.io/primary=true
	I0127 12:35:36.411048  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:36.411073  532344 ops.go:34] apiserver oom_adj: -16
	I0127 12:35:36.911315  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:37.411248  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:37.911876  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:38.411669  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:38.912069  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:39.412135  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:39.911694  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:40.411784  532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:40.580164  532344 kubeadm.go:1113] duration metric: took 4.463356481s to wait for elevateKubeSystemPrivileges
	I0127 12:35:40.580215  532344 kubeadm.go:394] duration metric: took 4m36.913272534s to StartCluster
	I0127 12:35:40.580240  532344 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:40.580344  532344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:35:40.581635  532344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:40.581867  532344 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:35:40.581994  532344 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:35:40.582133  532344 addons.go:69] Setting storage-provisioner=true in profile "no-preload-215237"
	I0127 12:35:40.582159  532344 addons.go:238] Setting addon storage-provisioner=true in "no-preload-215237"
	I0127 12:35:40.582165  532344 config.go:182] Loaded profile config "no-preload-215237": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:35:40.582184  532344 addons.go:69] Setting metrics-server=true in profile "no-preload-215237"
	I0127 12:35:40.582195  532344 addons.go:69] Setting default-storageclass=true in profile "no-preload-215237"
	W0127 12:35:40.582174  532344 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:35:40.582207  532344 addons.go:69] Setting dashboard=true in profile "no-preload-215237"
	I0127 12:35:40.582230  532344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-215237"
	I0127 12:35:40.582239  532344 addons.go:238] Setting addon metrics-server=true in "no-preload-215237"
	W0127 12:35:40.582256  532344 addons.go:247] addon metrics-server should already be in state true
	I0127 12:35:40.582272  532344 host.go:66] Checking if "no-preload-215237" exists ...
	I0127 12:35:40.582295  532344 host.go:66] Checking if "no-preload-215237" exists ...
	I0127 12:35:40.582243  532344 addons.go:238] Setting addon dashboard=true in "no-preload-215237"
	W0127 12:35:40.582332  532344 addons.go:247] addon dashboard should already be in state true
	I0127 12:35:40.582361  532344 host.go:66] Checking if "no-preload-215237" exists ...
	I0127 12:35:40.582677  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:40.582680  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:40.582718  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:40.582751  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:40.582795  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:40.582837  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:40.583090  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:40.583137  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:40.583649  532344 out.go:177] * Verifying Kubernetes components...
	I0127 12:35:40.584826  532344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:40.600033  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0127 12:35:40.600495  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:40.601101  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:35:40.601140  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:40.601548  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:40.601781  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
	I0127 12:35:40.602959  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0127 12:35:40.603116  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0127 12:35:40.603517  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:40.603557  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:40.603576  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0127 12:35:40.604106  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:35:40.604110  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:35:40.604131  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:40.604166  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:40.604237  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:40.604574  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:40.604574  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:40.604748  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:35:40.604773  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:40.605148  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:40.605190  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:40.605298  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:40.605350  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:40.605426  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:40.605560  532344 addons.go:238] Setting addon default-storageclass=true in "no-preload-215237"
	W0127 12:35:40.605581  532344 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:35:40.605610  532344 host.go:66] Checking if "no-preload-215237" exists ...
	I0127 12:35:40.605961  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:40.606003  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:40.606008  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:40.606124  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:40.622385  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33229
	I0127 12:35:40.622402  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
	I0127 12:35:40.622785  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:40.622902  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:40.623405  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:35:40.623425  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:40.623426  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:35:40.623444  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:40.623807  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:40.624012  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
	I0127 12:35:40.624084  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:40.624295  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
	I0127 12:35:40.625020  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0127 12:35:40.625761  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:40.626202  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:35:40.626233  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0127 12:35:40.626815  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:35:40.627424  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:35:40.627447  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:40.627766  532344 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:35:40.627810  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:40.628024  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:40.628336  532344 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:35:40.628494  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:35:40.628762  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:40.628625  532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:40.628856  532344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:40.629181  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:40.629838  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:35:40.629857  532344 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:35:40.629878  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:35:40.630595  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
	I0127 12:35:40.630632  532344 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:35:40.631966  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:35:40.631995  532344 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:35:40.632018  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:35:40.633361  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:35:40.633759  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:35:40.634423  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:35:40.634453  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:35:40.634769  532344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:35:40.634919  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:35:40.635154  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:35:40.635360  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:35:40.635498  532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
	I0127 12:35:40.636051  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:35:40.636213  532344 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:35:40.636227  532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:35:40.636243  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:35:40.636517  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:35:40.636548  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:35:40.636753  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:35:40.637014  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:35:40.637233  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:35:40.637418  532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
	I0127 12:35:40.639612  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:35:40.640039  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:35:40.640087  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:35:40.640197  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:35:40.640387  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:35:40.640530  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:35:40.640693  532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
	I0127 12:35:40.647815  532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0127 12:35:40.648197  532344 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:40.648682  532344 main.go:141] libmachine: Using API Version  1
	I0127 12:35:40.648709  532344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:40.649176  532344 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:40.649396  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
	I0127 12:35:40.651079  532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
	I0127 12:35:40.651315  532344 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:35:40.651335  532344 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:35:40.651361  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
	I0127 12:35:40.654639  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:35:40.655085  532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
	I0127 12:35:40.655104  532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
	I0127 12:35:40.655257  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
	I0127 12:35:40.655465  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
	I0127 12:35:40.655631  532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
	I0127 12:35:40.655792  532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
	I0127 12:35:40.799070  532344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:35:40.816802  532344 node_ready.go:35] waiting up to 6m0s for node "no-preload-215237" to be "Ready" ...
	I0127 12:35:40.842677  532344 node_ready.go:49] node "no-preload-215237" has status "Ready":"True"
	I0127 12:35:40.842703  532344 node_ready.go:38] duration metric: took 25.862086ms for node "no-preload-215237" to be "Ready" ...
	I0127 12:35:40.842716  532344 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
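
The label selectors listed above define which pods count as system-critical for this extra wait. A hedged, roughly equivalent check from the command line (illustrative; the test itself drives this through the Go client in pod_ready.go):

	# List the pods covered by the extra wait, one selector at a time
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context no-preload-215237 -n kube-system get pods -l "$sel"
	done
	# Or block until a selector reports Ready, similar in spirit to the polling above
	kubectl --context no-preload-215237 -n kube-system wait --for=condition=Ready \
	  pod -l k8s-app=kube-dns --timeout=6m
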
	I0127 12:35:40.853263  532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:40.876376  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:35:40.876407  532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:35:40.898870  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:35:40.903314  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:35:40.916620  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:35:40.916649  532344 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:35:41.067992  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:35:41.068023  532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:35:41.072700  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:35:41.072728  532344 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:35:41.155398  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:35:41.155426  532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:35:41.194887  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:35:41.230877  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:35:41.230909  532344 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:35:41.313376  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:35:41.313400  532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:35:41.442010  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:35:41.442049  532344 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:35:41.486996  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:35:41.487028  532344 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:35:41.616020  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:35:41.616057  532344 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:35:41.690855  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:35:41.690886  532344 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:35:41.720821  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:35:41.720851  532344 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:35:41.754849  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:35:41.990168  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091255427s)
	I0127 12:35:41.990220  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086878371s)
	I0127 12:35:41.990249  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990262  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990249  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990370  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990668  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.990683  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.990719  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.990725  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.990733  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.990747  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990758  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990821  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.990734  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990857  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.991027  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.991042  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.992412  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.992462  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.992477  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.004951  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.004969  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.005238  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.005254  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.005271  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:42.472191  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.277235038s)
	I0127 12:35:42.472268  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.472283  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.472619  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:42.472665  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.472683  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.472697  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.472706  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.472985  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.473012  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.473024  532344 addons.go:479] Verifying addon metrics-server=true in "no-preload-215237"
	I0127 12:35:42.890307  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:43.165047  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.410145551s)
	I0127 12:35:43.165103  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:43.165123  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:43.165633  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:43.165657  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:43.165676  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:43.165692  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:43.165705  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:43.165941  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:43.165957  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:43.167364  532344 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-215237 addons enable metrics-server
	
	I0127 12:35:43.168535  532344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 12:35:43.169652  532344 addons.go:514] duration metric: took 2.587685868s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 12:35:45.359702  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:46.359497  532344 pod_ready.go:93] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:46.359531  532344 pod_ready.go:82] duration metric: took 5.506181911s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:46.359547  532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.867744  532344 pod_ready.go:93] pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.867773  532344 pod_ready.go:82] duration metric: took 1.508215371s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.867785  532344 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.872748  532344 pod_ready.go:93] pod "etcd-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.872769  532344 pod_ready.go:82] duration metric: took 4.975217ms for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.872782  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.879135  532344 pod_ready.go:93] pod "kube-apiserver-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.879153  532344 pod_ready.go:82] duration metric: took 6.364009ms for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.879170  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.884792  532344 pod_ready.go:93] pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.884809  532344 pod_ready.go:82] duration metric: took 5.632068ms for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.884817  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.957535  532344 pod_ready.go:93] pod "kube-proxy-bbnm2" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.957564  532344 pod_ready.go:82] duration metric: took 72.739132ms for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.957577  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:48.358062  532344 pod_ready.go:93] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:48.358087  532344 pod_ready.go:82] duration metric: took 400.502078ms for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:48.358095  532344 pod_ready.go:39] duration metric: took 7.515367235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:48.358124  532344 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:35:48.358180  532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:48.381657  532344 api_server.go:72] duration metric: took 7.799751759s to wait for apiserver process to appear ...
	I0127 12:35:48.381684  532344 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:35:48.381704  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:35:48.387590  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 200:
	ok
	I0127 12:35:48.388765  532344 api_server.go:141] control plane version: v1.32.1
	I0127 12:35:48.388787  532344 api_server.go:131] duration metric: took 7.09706ms to wait for apiserver health ...
	I0127 12:35:48.388795  532344 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:35:48.560605  532344 system_pods.go:59] 9 kube-system pods found
	I0127 12:35:48.560642  532344 system_pods.go:61] "coredns-668d6bf9bc-v9stn" [011e6981-39d0-4fa1-bf1b-3d1e06c7c71a] Running
	I0127 12:35:48.560650  532344 system_pods.go:61] "coredns-668d6bf9bc-wwb9p" [0a034560-980a-40fb-9603-be18d02b6f05] Running
	I0127 12:35:48.560656  532344 system_pods.go:61] "etcd-no-preload-215237" [8b9ab7f2-224f-4373-9dc2-fa794a60d922] Running
	I0127 12:35:48.560659  532344 system_pods.go:61] "kube-apiserver-no-preload-215237" [064e0d8e-5d82-42bb-979d-cd0e9aa13f56] Running
	I0127 12:35:48.560663  532344 system_pods.go:61] "kube-controller-manager-no-preload-215237" [dd9c190f-c01e-4fa7-b033-57463b032d30] Running
	I0127 12:35:48.560666  532344 system_pods.go:61] "kube-proxy-bbnm2" [dd89ae69-6ad2-44cb-9c80-ba5529e22dc1] Running
	I0127 12:35:48.560671  532344 system_pods.go:61] "kube-scheduler-no-preload-215237" [41c25fba-7af8-4e0e-b96d-57be786d703c] Running
	I0127 12:35:48.560680  532344 system_pods.go:61] "metrics-server-f79f97bbb-lqck5" [3447c2da-cbb0-412c-a8d9-2be32c8e6dad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:35:48.560686  532344 system_pods.go:61] "storage-provisioner" [9627d136-2ecb-4cc3-969d-b62de2261147] Running
	I0127 12:35:48.560696  532344 system_pods.go:74] duration metric: took 171.894881ms to wait for pod list to return data ...
	I0127 12:35:48.560709  532344 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:35:48.760164  532344 default_sa.go:45] found service account: "default"
	I0127 12:35:48.760270  532344 default_sa.go:55] duration metric: took 199.548191ms for default service account to be created ...
	I0127 12:35:48.760295  532344 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:35:48.961828  532344 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-215237 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215237 -n no-preload-215237
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-215237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-215237 logs -n 25: (1.268182745s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-858845        | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | 27 Jan 25 12:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | 27 Jan 25 12:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-215237                  | no-preload-215237            | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | 27 Jan 25 12:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215237                                   | no-preload-215237            | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-346100                 | embed-certs-346100           | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-346100                                  | embed-certs-346100           | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-887672       | default-k8s-diff-port-887672 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-887672 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC |                     |
	|         | default-k8s-diff-port-887672                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-858845             | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-858845 image                           | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	| delete  | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	| start   | -p newest-cni-610630 --memory=2200 --alsologtostderr   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-610630             | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-610630                  | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-610630 --memory=2200 --alsologtostderr   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:36 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-610630 image list                           | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	| delete  | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:35:43
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:35:43.059479  534894 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:35:43.059651  534894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:35:43.059664  534894 out.go:358] Setting ErrFile to fd 2...
	I0127 12:35:43.059671  534894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:35:43.059931  534894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 12:35:43.061091  534894 out.go:352] Setting JSON to false
	I0127 12:35:43.062772  534894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11886,"bootTime":1737969457,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:35:43.062914  534894 start.go:139] virtualization: kvm guest
	I0127 12:35:43.064927  534894 out.go:177] * [newest-cni-610630] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:35:43.066246  534894 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:35:43.066268  534894 notify.go:220] Checking for updates...
	I0127 12:35:43.068595  534894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:35:43.069716  534894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:35:43.070810  534894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 12:35:43.071853  534894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:35:43.072978  534894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:35:43.074838  534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:35:43.075450  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:43.075519  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:43.091909  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35965
	I0127 12:35:43.093149  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:43.093802  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:35:43.093834  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:43.094269  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:43.094579  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:35:43.094848  534894 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:35:43.095161  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:43.095202  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:43.110695  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0127 12:35:43.111212  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:43.111903  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:35:43.111935  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:43.112295  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:43.112533  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:35:43.153545  534894 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:35:40.799070  532344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:35:40.816802  532344 node_ready.go:35] waiting up to 6m0s for node "no-preload-215237" to be "Ready" ...
	I0127 12:35:40.842677  532344 node_ready.go:49] node "no-preload-215237" has status "Ready":"True"
	I0127 12:35:40.842703  532344 node_ready.go:38] duration metric: took 25.862086ms for node "no-preload-215237" to be "Ready" ...
	I0127 12:35:40.842716  532344 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:40.853263  532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:40.876376  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:35:40.876407  532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:35:40.898870  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:35:40.903314  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:35:40.916620  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:35:40.916649  532344 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:35:41.067992  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:35:41.068023  532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:35:41.072700  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:35:41.072728  532344 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:35:41.155398  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:35:41.155426  532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:35:41.194887  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:35:41.230877  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:35:41.230909  532344 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:35:41.313376  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:35:41.313400  532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:35:41.442010  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:35:41.442049  532344 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:35:41.486996  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:35:41.487028  532344 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:35:41.616020  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:35:41.616057  532344 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:35:41.690855  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:35:41.690886  532344 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:35:41.720821  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:35:41.720851  532344 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:35:41.754849  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:35:41.990168  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091255427s)
	I0127 12:35:41.990220  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086878371s)
	I0127 12:35:41.990249  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990262  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990249  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990370  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990668  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.990683  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.990719  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.990725  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.990733  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.990747  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990758  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990821  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.990734  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990857  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.991027  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.991042  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.992412  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.992462  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.992477  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.004951  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.004969  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.005238  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.005254  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.005271  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:42.472191  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.277235038s)
	I0127 12:35:42.472268  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.472283  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.472619  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:42.472665  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.472683  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.472697  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.472706  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.472985  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.473012  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.473024  532344 addons.go:479] Verifying addon metrics-server=true in "no-preload-215237"
	I0127 12:35:42.890307  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:43.165047  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.410145551s)
	I0127 12:35:43.165103  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:43.165123  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:43.165633  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:43.165657  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:43.165676  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:43.165692  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:43.165705  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:43.165941  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:43.165957  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:43.167364  532344 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-215237 addons enable metrics-server
	
	I0127 12:35:43.168535  532344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 12:35:43.154513  534894 start.go:297] selected driver: kvm2
	I0127 12:35:43.154531  534894 start.go:901] validating driver "kvm2" against &{Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Listen
Address: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:35:43.154653  534894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:35:43.155362  534894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:35:43.155469  534894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:35:43.172617  534894 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:35:43.173026  534894 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 12:35:43.173063  534894 cni.go:84] Creating CNI manager for ""
	I0127 12:35:43.173110  534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:35:43.173145  534894 start.go:340] cluster config:
	{Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:35:43.173269  534894 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:35:43.174747  534894 out.go:177] * Starting "newest-cni-610630" primary control-plane node in "newest-cni-610630" cluster
	I0127 12:35:43.175803  534894 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:35:43.175846  534894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 12:35:43.175857  534894 cache.go:56] Caching tarball of preloaded images
	I0127 12:35:43.175957  534894 preload.go:172] Found /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:35:43.175970  534894 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 12:35:43.176077  534894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/config.json ...
	I0127 12:35:43.176271  534894 start.go:360] acquireMachinesLock for newest-cni-610630: {Name:mk818835aef0de701295cc2c98fea95e1be33202 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:35:43.176324  534894 start.go:364] duration metric: took 32.573µs to acquireMachinesLock for "newest-cni-610630"
	I0127 12:35:43.176345  534894 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:35:43.176356  534894 fix.go:54] fixHost starting: 
	I0127 12:35:43.176686  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:43.176750  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:43.191549  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37401
	I0127 12:35:43.191935  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:43.192419  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:35:43.192448  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:43.192934  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:43.193138  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:35:43.193300  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:35:43.195116  534894 fix.go:112] recreateIfNeeded on newest-cni-610630: state=Stopped err=<nil>
	I0127 12:35:43.195141  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	W0127 12:35:43.195320  534894 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:35:43.196456  534894 out.go:177] * Restarting existing kvm2 VM for "newest-cni-610630" ...
	I0127 12:35:43.169652  532344 addons.go:514] duration metric: took 2.587685868s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 12:35:45.359702  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:42.352585  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:44.353035  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:46.353087  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:44.707430  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:46.708896  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:43.197457  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Start
	I0127 12:35:43.197621  534894 main.go:141] libmachine: (newest-cni-610630) starting domain...
	I0127 12:35:43.197646  534894 main.go:141] libmachine: (newest-cni-610630) ensuring networks are active...
	I0127 12:35:43.198412  534894 main.go:141] libmachine: (newest-cni-610630) Ensuring network default is active
	I0127 12:35:43.198762  534894 main.go:141] libmachine: (newest-cni-610630) Ensuring network mk-newest-cni-610630 is active
	I0127 12:35:43.199182  534894 main.go:141] libmachine: (newest-cni-610630) getting domain XML...
	I0127 12:35:43.199981  534894 main.go:141] libmachine: (newest-cni-610630) creating domain...
	I0127 12:35:44.514338  534894 main.go:141] libmachine: (newest-cni-610630) waiting for IP...
	I0127 12:35:44.515307  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:44.515803  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:44.515875  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:44.515771  534929 retry.go:31] will retry after 248.83242ms: waiting for domain to come up
	I0127 12:35:44.766511  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:44.767046  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:44.767081  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:44.767011  534929 retry.go:31] will retry after 381.268975ms: waiting for domain to come up
	I0127 12:35:45.149680  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:45.150281  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:45.150314  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:45.150226  534929 retry.go:31] will retry after 435.74049ms: waiting for domain to come up
	I0127 12:35:45.587978  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:45.588682  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:45.588719  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:45.588634  534929 retry.go:31] will retry after 577.775914ms: waiting for domain to come up
	I0127 12:35:46.168596  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:46.169297  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:46.169332  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:46.169238  534929 retry.go:31] will retry after 539.718923ms: waiting for domain to come up
	I0127 12:35:46.711082  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:46.711652  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:46.711676  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:46.711635  534929 retry.go:31] will retry after 607.430128ms: waiting for domain to come up
	I0127 12:35:47.320403  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:47.320941  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:47.321006  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:47.320921  534929 retry.go:31] will retry after 772.973348ms: waiting for domain to come up
	I0127 12:35:46.359497  532344 pod_ready.go:93] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:46.359531  532344 pod_ready.go:82] duration metric: took 5.506181911s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:46.359547  532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.867744  532344 pod_ready.go:93] pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.867773  532344 pod_ready.go:82] duration metric: took 1.508215371s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.867785  532344 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.872748  532344 pod_ready.go:93] pod "etcd-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.872769  532344 pod_ready.go:82] duration metric: took 4.975217ms for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.872782  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.879135  532344 pod_ready.go:93] pod "kube-apiserver-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.879153  532344 pod_ready.go:82] duration metric: took 6.364009ms for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.879170  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.884792  532344 pod_ready.go:93] pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.884809  532344 pod_ready.go:82] duration metric: took 5.632068ms for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.884817  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.957535  532344 pod_ready.go:93] pod "kube-proxy-bbnm2" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.957564  532344 pod_ready.go:82] duration metric: took 72.739132ms for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.957577  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:48.358062  532344 pod_ready.go:93] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:48.358087  532344 pod_ready.go:82] duration metric: took 400.502078ms for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:48.358095  532344 pod_ready.go:39] duration metric: took 7.515367235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:48.358124  532344 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:35:48.358180  532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:48.381657  532344 api_server.go:72] duration metric: took 7.799751759s to wait for apiserver process to appear ...
	I0127 12:35:48.381684  532344 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:35:48.381704  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:35:48.387590  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 200:
	ok
	I0127 12:35:48.388765  532344 api_server.go:141] control plane version: v1.32.1
	I0127 12:35:48.388787  532344 api_server.go:131] duration metric: took 7.09706ms to wait for apiserver health ...
	I0127 12:35:48.388795  532344 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:35:48.560605  532344 system_pods.go:59] 9 kube-system pods found
	I0127 12:35:48.560642  532344 system_pods.go:61] "coredns-668d6bf9bc-v9stn" [011e6981-39d0-4fa1-bf1b-3d1e06c7c71a] Running
	I0127 12:35:48.560650  532344 system_pods.go:61] "coredns-668d6bf9bc-wwb9p" [0a034560-980a-40fb-9603-be18d02b6f05] Running
	I0127 12:35:48.560656  532344 system_pods.go:61] "etcd-no-preload-215237" [8b9ab7f2-224f-4373-9dc2-fa794a60d922] Running
	I0127 12:35:48.560659  532344 system_pods.go:61] "kube-apiserver-no-preload-215237" [064e0d8e-5d82-42bb-979d-cd0e9aa13f56] Running
	I0127 12:35:48.560663  532344 system_pods.go:61] "kube-controller-manager-no-preload-215237" [dd9c190f-c01e-4fa7-b033-57463b032d30] Running
	I0127 12:35:48.560666  532344 system_pods.go:61] "kube-proxy-bbnm2" [dd89ae69-6ad2-44cb-9c80-ba5529e22dc1] Running
	I0127 12:35:48.560671  532344 system_pods.go:61] "kube-scheduler-no-preload-215237" [41c25fba-7af8-4e0e-b96d-57be786d703c] Running
	I0127 12:35:48.560680  532344 system_pods.go:61] "metrics-server-f79f97bbb-lqck5" [3447c2da-cbb0-412c-a8d9-2be32c8e6dad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:35:48.560686  532344 system_pods.go:61] "storage-provisioner" [9627d136-2ecb-4cc3-969d-b62de2261147] Running
	I0127 12:35:48.560696  532344 system_pods.go:74] duration metric: took 171.894881ms to wait for pod list to return data ...
	I0127 12:35:48.560709  532344 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:35:48.760164  532344 default_sa.go:45] found service account: "default"
	I0127 12:35:48.760270  532344 default_sa.go:55] duration metric: took 199.548191ms for default service account to be created ...
	I0127 12:35:48.760295  532344 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:35:48.961828  532344 system_pods.go:87] 9 kube-system pods found
	I0127 12:35:48.846560  532607 pod_ready.go:82] duration metric: took 4m0.000837349s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:48.846588  532607 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 12:35:48.846609  532607 pod_ready.go:39] duration metric: took 4m15.043496386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:48.846642  532607 kubeadm.go:597] duration metric: took 4m22.373102966s to restartPrimaryControlPlane
	W0127 12:35:48.846704  532607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:35:48.846732  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 12:35:51.040149  532607 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.193395005s)
	I0127 12:35:51.040242  532607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:35:51.059048  532607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:35:51.071298  532607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:35:51.083050  532607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:35:51.083071  532607 kubeadm.go:157] found existing configuration files:
	
	I0127 12:35:51.083125  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:35:51.095124  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:35:51.095208  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:35:51.109222  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:35:51.120314  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:35:51.120390  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:35:51.129841  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:35:51.138490  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:35:51.138545  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:35:51.148658  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:35:51.157842  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:35:51.157894  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:35:51.167146  532607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:35:51.220576  532607 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:35:51.220796  532607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:35:51.342653  532607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:35:51.342830  532607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:35:51.343020  532607 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:35:51.348865  532607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:35:51.351235  532607 out.go:235]   - Generating certificates and keys ...
	I0127 12:35:51.351355  532607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:35:51.351445  532607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:35:51.351549  532607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:35:51.351635  532607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:35:51.351728  532607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:35:51.351801  532607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:35:51.351908  532607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:35:51.352000  532607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:35:51.352111  532607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:35:51.352262  532607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:35:51.352422  532607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:35:51.352546  532607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:35:51.416524  532607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:35:51.666997  532607 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:35:51.867237  532607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:35:52.007584  532607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:35:52.100986  532607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:35:52.101889  532607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:35:52.105806  532607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:35:52.107605  532607 out.go:235]   - Booting up control plane ...
	I0127 12:35:52.107745  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:35:52.108083  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:35:52.109913  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:35:52.146307  532607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:35:52.156130  532607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:35:52.156211  532607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:35:52.316523  532607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:35:52.316653  532607 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:35:48.711637  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:51.208760  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:48.096119  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:48.096791  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:48.096823  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:48.096728  534929 retry.go:31] will retry after 1.301268199s: waiting for domain to come up
	I0127 12:35:49.400077  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:49.400697  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:49.400729  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:49.400664  534929 retry.go:31] will retry after 1.62599798s: waiting for domain to come up
	I0127 12:35:51.029156  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:51.029715  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:51.029746  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:51.029706  534929 retry.go:31] will retry after 1.477748588s: waiting for domain to come up
	I0127 12:35:52.509484  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:52.510252  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:52.510299  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:52.510150  534929 retry.go:31] will retry after 1.875473187s: waiting for domain to come up
	I0127 12:35:53.322303  532607 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005635238s
	I0127 12:35:53.322436  532607 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:35:53.708069  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:56.209743  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:54.387170  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:54.387808  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:54.387840  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:54.387764  534929 retry.go:31] will retry after 2.219284161s: waiting for domain to come up
	I0127 12:35:56.609666  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:56.610140  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:56.610163  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:56.610112  534929 retry.go:31] will retry after 3.124115638s: waiting for domain to come up
	I0127 12:35:58.324673  532607 kubeadm.go:310] [api-check] The API server is healthy after 5.002577765s
	I0127 12:35:58.341207  532607 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:35:58.354763  532607 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:35:58.376218  532607 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:35:58.376468  532607 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-346100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:35:58.389424  532607 kubeadm.go:310] [bootstrap-token] Using token: 5069a0.5f3g1pdxhpmrcoga
	I0127 12:35:58.390773  532607 out.go:235]   - Configuring RBAC rules ...
	I0127 12:35:58.390901  532607 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:35:58.397069  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:35:58.405069  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:35:58.409291  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:35:58.412914  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:35:58.415499  532607 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:35:58.732028  532607 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:35:59.154936  532607 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:35:59.732670  532607 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:35:59.734653  532607 kubeadm.go:310] 
	I0127 12:35:59.734754  532607 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:35:59.734788  532607 kubeadm.go:310] 
	I0127 12:35:59.734919  532607 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:35:59.734933  532607 kubeadm.go:310] 
	I0127 12:35:59.734978  532607 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:35:59.735094  532607 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:35:59.735193  532607 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:35:59.735206  532607 kubeadm.go:310] 
	I0127 12:35:59.735295  532607 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:35:59.735316  532607 kubeadm.go:310] 
	I0127 12:35:59.735384  532607 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:35:59.735392  532607 kubeadm.go:310] 
	I0127 12:35:59.735463  532607 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:35:59.735570  532607 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:35:59.735692  532607 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:35:59.735707  532607 kubeadm.go:310] 
	I0127 12:35:59.735853  532607 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:35:59.735964  532607 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:35:59.735986  532607 kubeadm.go:310] 
	I0127 12:35:59.736104  532607 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
	I0127 12:35:59.736265  532607 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
	I0127 12:35:59.736299  532607 kubeadm.go:310] 	--control-plane 
	I0127 12:35:59.736312  532607 kubeadm.go:310] 
	I0127 12:35:59.736432  532607 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:35:59.736441  532607 kubeadm.go:310] 
	I0127 12:35:59.736583  532607 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
	I0127 12:35:59.736761  532607 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 
	I0127 12:35:59.738118  532607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:35:59.738152  532607 cni.go:84] Creating CNI manager for ""
	I0127 12:35:59.738162  532607 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:35:59.739901  532607 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:35:59.741063  532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:35:59.759536  532607 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:35:59.777178  532607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:35:59.777199  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:59.777236  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-346100 minikube.k8s.io/updated_at=2025_01_27T12_35_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=embed-certs-346100 minikube.k8s.io/primary=true
	I0127 12:35:59.974092  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:59.974117  532607 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:00.474716  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:00.974693  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:01.474216  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:01.974205  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:58.707466  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:01.206257  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:59.736004  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:59.736626  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:59.736649  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:59.736597  534929 retry.go:31] will retry after 3.849528984s: waiting for domain to come up
	I0127 12:36:02.475052  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:02.975120  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:03.474457  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:03.577041  532607 kubeadm.go:1113] duration metric: took 3.799909499s to wait for elevateKubeSystemPrivileges
	I0127 12:36:03.577092  532607 kubeadm.go:394] duration metric: took 4m37.171719699s to StartCluster
	I0127 12:36:03.577128  532607 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:03.577224  532607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:03.579144  532607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:03.579423  532607 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:03.579505  532607 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:03.579620  532607 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-346100"
	I0127 12:36:03.579641  532607 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-346100"
	W0127 12:36:03.579650  532607 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:03.579651  532607 addons.go:69] Setting default-storageclass=true in profile "embed-certs-346100"
	I0127 12:36:03.579676  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.579688  532607 config.go:182] Loaded profile config "embed-certs-346100": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:03.579700  532607 addons.go:69] Setting dashboard=true in profile "embed-certs-346100"
	I0127 12:36:03.579723  532607 addons.go:238] Setting addon dashboard=true in "embed-certs-346100"
	I0127 12:36:03.579715  532607 addons.go:69] Setting metrics-server=true in profile "embed-certs-346100"
	W0127 12:36:03.579740  532607 addons.go:247] addon dashboard should already be in state true
	I0127 12:36:03.579694  532607 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-346100"
	I0127 12:36:03.579749  532607 addons.go:238] Setting addon metrics-server=true in "embed-certs-346100"
	W0127 12:36:03.579764  532607 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:03.579779  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.579800  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.580054  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580088  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580101  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580150  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580190  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580215  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580233  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580258  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.581024  532607 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:03.582429  532607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:03.598339  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0127 12:36:03.598375  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 12:36:03.598838  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.598892  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34331
	I0127 12:36:03.598919  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.599306  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.599470  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599486  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599497  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599511  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599722  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599738  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599912  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.599974  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.600223  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.600494  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600530  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600545  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600578  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600674  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600699  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600881  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0127 12:36:03.601524  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.602100  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.602116  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.602471  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.602687  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.606648  532607 addons.go:238] Setting addon default-storageclass=true in "embed-certs-346100"
	W0127 12:36:03.606677  532607 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:03.606709  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.607067  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.607104  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.619967  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0127 12:36:03.620348  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33043
	I0127 12:36:03.620623  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.620935  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.621427  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.621447  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.621789  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.621804  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.621998  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.622221  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.622273  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.622543  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.624486  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.624677  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.625420  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43503
	I0127 12:36:03.626112  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.626167  532607 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:03.626180  532607 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:03.626583  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.626602  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.626611  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0127 12:36:03.626942  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.627027  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.627437  532607 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:03.627453  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:03.627464  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.627467  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.627475  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.627504  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.627471  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.627836  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.628149  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.628561  532607 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:03.629535  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:03.629551  532607 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:03.629575  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.630434  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.631724  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.632213  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.632232  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.632448  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.632593  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.632682  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.632867  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.632996  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.633161  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.633189  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.633418  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.633573  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.633701  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.633812  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.634247  532607 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:03.635266  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:03.635284  532607 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:03.635305  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.637878  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.638306  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.638338  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.638542  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.638697  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.638867  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.639116  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.643537  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0127 12:36:03.643881  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.644309  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.644327  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.644644  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.644952  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.646128  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.646325  532607 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:03.646341  532607 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:03.646358  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.649282  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.649641  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.649669  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.649910  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.650077  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.650198  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.650298  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.805663  532607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:03.824512  532607 node_ready.go:35] waiting up to 6m0s for node "embed-certs-346100" to be "Ready" ...
	I0127 12:36:03.856505  532607 node_ready.go:49] node "embed-certs-346100" has status "Ready":"True"
	I0127 12:36:03.856540  532607 node_ready.go:38] duration metric: took 31.977019ms for node "embed-certs-346100" to be "Ready" ...
	I0127 12:36:03.856555  532607 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:03.863683  532607 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:03.902624  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:03.925389  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:03.977654  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:03.977686  532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:04.012033  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:04.012063  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:04.029962  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:04.029991  532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:04.076532  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:04.076565  532607 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:04.136201  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:04.136229  532607 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:04.142268  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:04.142293  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:04.174895  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:04.174919  532607 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:04.185938  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:04.185959  532607 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:04.204606  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:04.226546  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:04.226574  532607 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:04.340411  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:04.340438  532607 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:04.424847  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.424878  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.425230  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:04.425269  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.425293  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.425304  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.425329  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.425596  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.425613  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.425627  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:04.443059  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.443080  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.443380  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.443404  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.457532  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:04.457557  532607 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:04.529771  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:04.529803  532607 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:04.581907  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:05.466462  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541011177s)
	I0127 12:36:05.466526  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.466544  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.466865  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.466934  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.466947  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.466957  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.466969  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.467283  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.467328  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.467300  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.677171  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.472522816s)
	I0127 12:36:05.677230  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.677244  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.677645  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.677684  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.677699  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.677711  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.677723  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.678056  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.678091  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.678115  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.678132  532607 addons.go:479] Verifying addon metrics-server=true in "embed-certs-346100"
	I0127 12:36:05.870203  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:06.503934  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.921960102s)
	I0127 12:36:06.504007  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:06.504025  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:06.504372  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:06.504489  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:06.504506  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:06.504514  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:06.504460  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:06.504814  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:06.504834  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:06.504835  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:06.506475  532607 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-346100 addons enable metrics-server
	
	I0127 12:36:06.507672  532607 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 12:36:06.508878  532607 addons.go:514] duration metric: took 2.929397312s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 12:36:03.587872  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.588437  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has current primary IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.588458  534894 main.go:141] libmachine: (newest-cni-610630) found domain IP: 192.168.39.228
	I0127 12:36:03.588471  534894 main.go:141] libmachine: (newest-cni-610630) reserving static IP address...
	I0127 12:36:03.589076  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "newest-cni-610630", mac: "52:54:00:49:61:34", ip: "192.168.39.228"} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.589105  534894 main.go:141] libmachine: (newest-cni-610630) reserved static IP address 192.168.39.228 for domain newest-cni-610630
	I0127 12:36:03.589131  534894 main.go:141] libmachine: (newest-cni-610630) DBG | skip adding static IP to network mk-newest-cni-610630 - found existing host DHCP lease matching {name: "newest-cni-610630", mac: "52:54:00:49:61:34", ip: "192.168.39.228"}
	I0127 12:36:03.589141  534894 main.go:141] libmachine: (newest-cni-610630) waiting for SSH...
	I0127 12:36:03.589165  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Getting to WaitForSSH function...
	I0127 12:36:03.592182  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.592771  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.592796  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.593171  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Using SSH client type: external
	I0127 12:36:03.593190  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa (-rw-------)
	I0127 12:36:03.593218  534894 main.go:141] libmachine: (newest-cni-610630) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:36:03.593228  534894 main.go:141] libmachine: (newest-cni-610630) DBG | About to run SSH command:
	I0127 12:36:03.593239  534894 main.go:141] libmachine: (newest-cni-610630) DBG | exit 0
	I0127 12:36:03.733183  534894 main.go:141] libmachine: (newest-cni-610630) DBG | SSH cmd err, output: <nil>: 
	I0127 12:36:03.733566  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetConfigRaw
	I0127 12:36:03.734338  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:03.737083  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.737511  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.737553  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.737875  534894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/config.json ...
	I0127 12:36:03.738075  534894 machine.go:93] provisionDockerMachine start ...
	I0127 12:36:03.738099  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:03.738370  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:03.741025  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.741354  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.741384  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.741566  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:03.741756  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.741966  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.742141  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:03.742356  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:03.742588  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:03.742604  534894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:36:03.853610  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:36:03.853641  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
	I0127 12:36:03.853921  534894 buildroot.go:166] provisioning hostname "newest-cni-610630"
	I0127 12:36:03.853957  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
	I0127 12:36:03.854185  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:03.857441  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.857928  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.857961  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.858074  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:03.858293  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.858504  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.858678  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:03.858886  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:03.859093  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:03.859120  534894 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-610630 && echo "newest-cni-610630" | sudo tee /etc/hostname
	I0127 12:36:03.986908  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-610630
	
	I0127 12:36:03.986946  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:03.990070  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.990587  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.990628  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.990879  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:03.991122  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.991299  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.991452  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:03.991678  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:03.991897  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:03.991926  534894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-610630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-610630/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-610630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:36:04.113288  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:36:04.113333  534894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-471120/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-471120/.minikube}
	I0127 12:36:04.113360  534894 buildroot.go:174] setting up certificates
	I0127 12:36:04.113382  534894 provision.go:84] configureAuth start
	I0127 12:36:04.113398  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
	I0127 12:36:04.113676  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:04.116365  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.116714  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.116764  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.117068  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.119378  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.119713  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.119736  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.119918  534894 provision.go:143] copyHostCerts
	I0127 12:36:04.119990  534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem, removing ...
	I0127 12:36:04.120016  534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem
	I0127 12:36:04.120102  534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem (1082 bytes)
	I0127 12:36:04.120256  534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem, removing ...
	I0127 12:36:04.120274  534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem
	I0127 12:36:04.120316  534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem (1123 bytes)
	I0127 12:36:04.120402  534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem, removing ...
	I0127 12:36:04.120415  534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem
	I0127 12:36:04.120457  534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem (1679 bytes)
	I0127 12:36:04.120535  534894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem org=jenkins.newest-cni-610630 san=[127.0.0.1 192.168.39.228 localhost minikube newest-cni-610630]
	I0127 12:36:04.308578  534894 provision.go:177] copyRemoteCerts
	I0127 12:36:04.308646  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:36:04.308681  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.311740  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.312147  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.312181  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.312367  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.312539  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.312718  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.312951  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.406421  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:36:04.434493  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:36:04.458820  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 12:36:04.483270  534894 provision.go:87] duration metric: took 369.872198ms to configureAuth
	I0127 12:36:04.483307  534894 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:36:04.483583  534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:04.483608  534894 machine.go:96] duration metric: took 745.518388ms to provisionDockerMachine
	I0127 12:36:04.483622  534894 start.go:293] postStartSetup for "newest-cni-610630" (driver="kvm2")
	I0127 12:36:04.483638  534894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:36:04.483676  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.484046  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:36:04.484091  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.487237  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.487689  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.487724  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.487930  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.488140  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.488365  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.488527  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.578283  534894 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:36:04.583274  534894 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:36:04.583302  534894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/addons for local assets ...
	I0127 12:36:04.583381  534894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/files for local assets ...
	I0127 12:36:04.583480  534894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem -> 4783872.pem in /etc/ssl/certs
	I0127 12:36:04.583597  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:36:04.594213  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:36:04.618506  534894 start.go:296] duration metric: took 134.861455ms for postStartSetup
	I0127 12:36:04.618569  534894 fix.go:56] duration metric: took 21.442212309s for fixHost
	I0127 12:36:04.618601  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.621910  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.622352  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.622388  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.622670  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.622872  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.623064  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.623231  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.623434  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:04.623683  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:04.623701  534894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:36:04.745637  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981364.720376969
	
	I0127 12:36:04.745668  534894 fix.go:216] guest clock: 1737981364.720376969
	I0127 12:36:04.745677  534894 fix.go:229] Guest: 2025-01-27 12:36:04.720376969 +0000 UTC Remote: 2025-01-27 12:36:04.618576525 +0000 UTC m=+21.609424923 (delta=101.800444ms)
	I0127 12:36:04.745704  534894 fix.go:200] guest clock delta is within tolerance: 101.800444ms
	I0127 12:36:04.745711  534894 start.go:83] releasing machines lock for "newest-cni-610630", held for 21.569374077s
	I0127 12:36:04.745742  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.746064  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:04.749116  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.749586  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.749623  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.749762  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.750369  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.750591  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.750714  534894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:36:04.750788  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.750841  534894 ssh_runner.go:195] Run: cat /version.json
	I0127 12:36:04.750872  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.753604  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.753937  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.753995  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.754036  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.754117  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.754283  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.754435  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.754463  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.754505  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.754649  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.754824  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.754704  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.754972  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.755165  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.837766  534894 ssh_runner.go:195] Run: systemctl --version
	I0127 12:36:04.870922  534894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:36:04.877067  534894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:36:04.877148  534894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:36:04.898288  534894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:36:04.898318  534894 start.go:495] detecting cgroup driver to use...
	I0127 12:36:04.898407  534894 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:36:04.932879  534894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:36:04.949987  534894 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:36:04.950133  534894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:36:04.967044  534894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:36:04.983091  534894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:36:05.124492  534894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:36:05.268901  534894 docker.go:233] disabling docker service ...
	I0127 12:36:05.268987  534894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:36:05.284320  534894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:36:05.298992  534894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:36:05.441228  534894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:36:05.609452  534894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:36:05.626916  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:36:05.647205  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:36:05.657704  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:36:05.667476  534894 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:36:05.667555  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:36:05.677468  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:36:05.688601  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:36:05.698702  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:36:05.710663  534894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:36:05.724221  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:36:05.737093  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:36:05.746742  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
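The sed edits above rewrite containerd's CRI settings in /etc/containerd/config.toml: the pause (sandbox) image, the cgroup driver (SystemdCgroup = false, i.e. cgroupfs), the CNI conf_dir, and unprivileged ports. A minimal spot-check sketch (illustrative only, not part of this run) for confirming what landed in the file:

	# show the keys the sed commands above touch
	grep -nE 'sandbox_image|SystemdCgroup|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml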
	I0127 12:36:05.756481  534894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:36:05.767282  534894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:36:05.767344  534894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:36:05.780026  534894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:36:05.791098  534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:05.930676  534894 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:36:05.966221  534894 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 12:36:05.966321  534894 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:36:05.971094  534894 retry.go:31] will retry after 1.421722911s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 12:36:07.393037  534894 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:36:07.398456  534894 start.go:563] Will wait 60s for crictl version
	I0127 12:36:07.398530  534894 ssh_runner.go:195] Run: which crictl
	I0127 12:36:07.402351  534894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:36:07.446224  534894 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 12:36:07.446301  534894 ssh_runner.go:195] Run: containerd --version
	I0127 12:36:07.473080  534894 ssh_runner.go:195] Run: containerd --version
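After `sudo systemctl restart containerd` the socket can take a moment to reappear, which is why the log above stats /run/containerd/containerd.sock, retries after ~1.4s, and only then queries crictl. A rough shell equivalent of that wait-and-verify step (a sketch, not minikube's implementation):

	# wait up to 60s for the containerd socket, then confirm the runtime answers CRI calls
	for i in $(seq 1 60); do
	  [ -S /run/containerd/containerd.sock ] && break
	  sleep 1
	done
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version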
	I0127 12:36:07.497663  534894 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 12:36:07.498857  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:07.501622  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:07.502032  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:07.502071  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:07.502274  534894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 12:36:07.506028  534894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:36:07.519964  534894 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 12:36:03.206663  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:05.207472  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:07.706605  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:07.521255  534894 kubeadm.go:883] updating cluster {Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:36:07.521413  534894 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:36:07.521493  534894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:36:07.554098  534894 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:36:07.554125  534894 containerd.go:534] Images already preloaded, skipping extraction
	I0127 12:36:07.554187  534894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:36:07.591861  534894 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:36:07.591890  534894 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:36:07.591901  534894 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.32.1 containerd true true} ...
	I0127 12:36:07.592033  534894 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-610630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:36:07.592107  534894 ssh_runner.go:195] Run: sudo crictl info
	I0127 12:36:07.633013  534894 cni.go:84] Creating CNI manager for ""
	I0127 12:36:07.633040  534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:07.633051  534894 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 12:36:07.633082  534894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-610630 NodeName:newest-cni-610630 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:36:07.633263  534894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-610630"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.228"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:36:07.633336  534894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:36:07.643906  534894 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:36:07.643972  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:36:07.653399  534894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0127 12:36:07.671016  534894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:36:07.691229  534894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2311 bytes)
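The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (2311 bytes). If one wanted to sanity-check such a file by hand, kubeadm ships its own validator; a sketch, assuming the v1.32.1 binary already present under /var/lib/minikube/binaries on the node:

	# validate the staged kubeadm config against the v1beta4 schema
	sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new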
	I0127 12:36:07.711891  534894 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I0127 12:36:07.716614  534894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:36:07.730520  534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:07.852685  534894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:07.870469  534894 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630 for IP: 192.168.39.228
	I0127 12:36:07.870498  534894 certs.go:194] generating shared ca certs ...
	I0127 12:36:07.870523  534894 certs.go:226] acquiring lock for ca certs: {Name:mk02d117412837bd489768267e2b174e6c3ff6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:07.870697  534894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key
	I0127 12:36:07.870773  534894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key
	I0127 12:36:07.870785  534894 certs.go:256] generating profile certs ...
	I0127 12:36:07.870943  534894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/client.key
	I0127 12:36:07.871073  534894 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.key.2ce4e80e
	I0127 12:36:07.871140  534894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.key
	I0127 12:36:07.871291  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem (1338 bytes)
	W0127 12:36:07.871334  534894 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387_empty.pem, impossibly tiny 0 bytes
	I0127 12:36:07.871349  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:36:07.871394  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:36:07.871429  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:36:07.871461  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem (1679 bytes)
	I0127 12:36:07.871519  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:36:07.872415  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:36:07.904294  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:36:07.944289  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:36:07.979498  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:36:08.010836  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 12:36:08.041389  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:36:08.201622  532844 pod_ready.go:82] duration metric: took 4m0.001032286s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" ...
	E0127 12:36:08.201658  532844 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 12:36:08.201683  532844 pod_ready.go:39] duration metric: took 4m14.040174083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:08.201724  532844 kubeadm.go:597] duration metric: took 4m21.555444284s to restartPrimaryControlPlane
	W0127 12:36:08.201798  532844 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:36:08.201833  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 12:36:10.133466  532844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.93160232s)
	I0127 12:36:10.133550  532844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:36:10.155296  532844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:36:10.170023  532844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:36:10.183165  532844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:36:10.183194  532844 kubeadm.go:157] found existing configuration files:
	
	I0127 12:36:10.183257  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 12:36:10.195175  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:36:10.195253  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:36:10.208349  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 12:36:10.220351  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:36:10.220429  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:36:10.238914  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 12:36:10.254995  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:36:10.255067  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:36:10.266753  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 12:36:10.278422  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:36:10.278490  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:36:10.292279  532844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:36:10.351007  532844 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:36:10.351189  532844 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:36:10.469769  532844 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:36:10.469949  532844 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:36:10.470056  532844 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:36:10.479353  532844 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:36:10.481858  532844 out.go:235]   - Generating certificates and keys ...
	I0127 12:36:10.481959  532844 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:36:10.482038  532844 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:36:10.482135  532844 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:36:10.482236  532844 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:36:10.482358  532844 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:36:10.482442  532844 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:36:10.482525  532844 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:36:10.482633  532844 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:36:10.483039  532844 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:36:10.483619  532844 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:36:10.483746  532844 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:36:10.483829  532844 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:36:10.585561  532844 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:36:10.784195  532844 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:36:10.958020  532844 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:36:11.223196  532844 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:36:11.439416  532844 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:36:11.440271  532844 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:36:11.444236  532844 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:36:08.374973  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:10.872073  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:11.445766  532844 out.go:235]   - Booting up control plane ...
	I0127 12:36:11.445895  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:36:11.445993  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:36:11.447764  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:36:11.484418  532844 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:36:11.496508  532844 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:36:11.496594  532844 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:36:11.681886  532844 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:36:11.682039  532844 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:36:12.183183  532844 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.076889ms
	I0127 12:36:12.183305  532844 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:36:08.074441  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:36:08.107699  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:36:08.137950  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem --> /usr/share/ca-certificates/478387.pem (1338 bytes)
	I0127 12:36:08.163896  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /usr/share/ca-certificates/4783872.pem (1708 bytes)
	I0127 12:36:08.188493  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:36:08.217196  534894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:36:08.237633  534894 ssh_runner.go:195] Run: openssl version
	I0127 12:36:08.244270  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478387.pem && ln -fs /usr/share/ca-certificates/478387.pem /etc/ssl/certs/478387.pem"
	I0127 12:36:08.258544  534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478387.pem
	I0127 12:36:08.264117  534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:28 /usr/share/ca-certificates/478387.pem
	I0127 12:36:08.264194  534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478387.pem
	I0127 12:36:08.271823  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478387.pem /etc/ssl/certs/51391683.0"
	I0127 12:36:08.283160  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783872.pem && ln -fs /usr/share/ca-certificates/4783872.pem /etc/ssl/certs/4783872.pem"
	I0127 12:36:08.293600  534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783872.pem
	I0127 12:36:08.299046  534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:28 /usr/share/ca-certificates/4783872.pem
	I0127 12:36:08.299115  534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783872.pem
	I0127 12:36:08.306015  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783872.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:36:08.317692  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:36:08.328317  534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:36:08.332856  534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:36:08.332912  534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:36:08.342875  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
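
The commands above install each CA bundle under /usr/share/ca-certificates and then symlink it into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0 in this run). Below is a minimal Go sketch of that hash-and-symlink step; it shells out to openssl just as the runner does, and the certificate path is only an example taken from the log, not a prescribed location.

	// hashlink.go - sketch of the subject-hash symlink step shown above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "openssl:", err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs equivalent: drop any stale link first
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink:", err)
			os.Exit(1)
		}
		fmt.Println("linked", link, "->", cert)
	}
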
	I0127 12:36:08.355240  534894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:36:08.363234  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:36:08.369655  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:36:08.377149  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:36:08.382739  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:36:08.388277  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:36:08.395644  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
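
Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The same check can be expressed directly with Go's crypto/x509, sketched below as a rough equivalent (the path is one of the certs from the log and purely illustrative):

	// checkend.go - rough Go equivalent of `openssl x509 -checkend 86400`:
	// report whether a certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}
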
	I0127 12:36:08.403226  534894 kubeadm.go:392] StartCluster: {Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:36:08.403325  534894 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 12:36:08.403369  534894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:36:08.454071  534894 cri.go:89] found id: "05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2"
	I0127 12:36:08.454100  534894 cri.go:89] found id: "a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250"
	I0127 12:36:08.454104  534894 cri.go:89] found id: "357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271"
	I0127 12:36:08.454108  534894 cri.go:89] found id: "8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b"
	I0127 12:36:08.454118  534894 cri.go:89] found id: "631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a"
	I0127 12:36:08.454123  534894 cri.go:89] found id: "3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13"
	I0127 12:36:08.454127  534894 cri.go:89] found id: "ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76"
	I0127 12:36:08.454130  534894 cri.go:89] found id: "0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c"
	I0127 12:36:08.454134  534894 cri.go:89] found id: ""
	I0127 12:36:08.454198  534894 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 12:36:08.472428  534894 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T12:36:08Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
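
Before the restart, the runner collects kube-system container IDs with crictl (filtered on the io.kubernetes.pod.namespace label) and then probes `runc list` to see whether any of them are paused; since /run/containerd/runc/k8s.io does not exist on this VM, that probe fails and is logged as a warning and otherwise ignored. A rough sketch of the ID-collection step, shelling out in the same spirit (sudo and crictl assumed to be available on the guest):

	// crictl_ids.go - sketch of the kube-system container listing shown above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}
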
	I0127 12:36:08.472525  534894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:36:08.484156  534894 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:36:08.484183  534894 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:36:08.484255  534894 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:36:08.494975  534894 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:36:08.496360  534894 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-610630" does not appear in /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:08.497417  534894 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-471120/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-610630" cluster setting kubeconfig missing "newest-cni-610630" context setting]
	I0127 12:36:08.498843  534894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:08.501415  534894 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:36:08.513111  534894 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.228
	I0127 12:36:08.513147  534894 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:36:08.513163  534894 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 12:36:08.513216  534894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:36:08.561176  534894 cri.go:89] found id: "05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2"
	I0127 12:36:08.561203  534894 cri.go:89] found id: "a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250"
	I0127 12:36:08.561209  534894 cri.go:89] found id: "357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271"
	I0127 12:36:08.561214  534894 cri.go:89] found id: "8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b"
	I0127 12:36:08.561218  534894 cri.go:89] found id: "631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a"
	I0127 12:36:08.561223  534894 cri.go:89] found id: "3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13"
	I0127 12:36:08.561227  534894 cri.go:89] found id: "ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76"
	I0127 12:36:08.561231  534894 cri.go:89] found id: "0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c"
	I0127 12:36:08.561235  534894 cri.go:89] found id: ""
	I0127 12:36:08.561242  534894 cri.go:252] Stopping containers: [05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2 a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250 357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271 8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b 631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a 3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13 ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76 0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c]
	I0127 12:36:08.561301  534894 ssh_runner.go:195] Run: which crictl
	I0127 12:36:08.565588  534894 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2 a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250 357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271 8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b 631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a 3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13 ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76 0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c
	I0127 12:36:08.619372  534894 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:36:08.636553  534894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:36:08.648359  534894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:36:08.648385  534894 kubeadm.go:157] found existing configuration files:
	
	I0127 12:36:08.648439  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:36:08.659186  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:36:08.659257  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:36:08.668828  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:36:08.679551  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:36:08.679624  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:36:08.689530  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:36:08.701111  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:36:08.701164  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:36:08.709830  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:36:08.718407  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:36:08.718495  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:36:08.727400  534894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:36:08.736296  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:08.887779  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:09.818917  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:10.080535  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:10.159744  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
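
With the stale kubeconfig files cleared and the new kubeadm.yaml copied into place, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`. A plain-loop sketch of the same sequence follows; the binary path and config file mirror the log, and this is illustrative rather than minikube's actual implementation:

	// phases.go - sketch of the per-phase control-plane restart shown above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, p := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", p, err, out)
				os.Exit(1)
			}
		}
		fmt.Println("all kubeadm init phases completed")
	}
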
	I0127 12:36:10.232154  534894 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:10.232252  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:10.732454  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:11.233357  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:11.264081  534894 api_server.go:72] duration metric: took 1.031921463s to wait for apiserver process to appear ...
	I0127 12:36:11.264115  534894 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:11.264142  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:11.264724  534894 api_server.go:269] stopped: https://192.168.39.228:8443/healthz: Get "https://192.168.39.228:8443/healthz": dial tcp 192.168.39.228:8443: connect: connection refused
	I0127 12:36:11.764442  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:14.358365  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:36:14.358472  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:36:14.358502  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:14.408913  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:36:14.409034  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:36:14.764463  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:14.771512  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:36:14.771584  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:36:15.264813  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:15.270318  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:36:15.270344  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:36:15.765063  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:15.772704  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:36:15.772774  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:36:16.264285  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:16.271130  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0127 12:36:16.281041  534894 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:16.281071  534894 api_server.go:131] duration metric: took 5.016947638s to wait for apiserver health ...
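
The healthz wait above is a plain HTTP poll against https://192.168.39.228:8443/healthz: the first probe is refused while the apiserver is still coming up, the next ones return 403 because the unauthenticated probe is treated as system:anonymous before the RBAC bootstrap roles exist, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200 "ok". A minimal polling sketch follows; TLS verification is skipped only to keep it short, whereas a real check would trust the cluster CA:

	// healthz_poll.go - sketch of the apiserver /healthz polling shown above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		url := "https://192.168.39.228:8443/healthz" // endpoint from the log
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Println("healthz unreachable, retrying:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "apiserver did not become healthy in time")
		os.Exit(1)
	}
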
	I0127 12:36:16.281087  534894 cni.go:84] Creating CNI manager for ""
	I0127 12:36:16.281096  534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:16.282806  534894 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:36:16.284232  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:36:16.297533  534894 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:36:16.314501  534894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:16.324319  534894 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:16.324349  534894 system_pods.go:61] "coredns-668d6bf9bc-n6hwn" [24d3582e-97d0-4bb8-b12a-6f69ecd72309] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:16.324357  534894 system_pods.go:61] "coredns-668d6bf9bc-vg4bb" [3d20d4a5-8ddf-4166-af63-47beab76d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:16.324365  534894 system_pods.go:61] "etcd-newest-cni-610630" [0a812f8b-1e38-49a2-be17-9db3fb1979db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:36:16.324379  534894 system_pods.go:61] "kube-apiserver-newest-cni-610630" [292fb9b9-ccfb-49b5-892d-078e5897981d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:36:16.324385  534894 system_pods.go:61] "kube-controller-manager-newest-cni-610630" [36d1c2e6-42ce-4d4a-8bba-ff7beb6551ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:36:16.324391  534894 system_pods.go:61] "kube-proxy-8szpt" [11487c0b-6afa-4c83-9eb8-6a9f609f7b58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:36:16.324395  534894 system_pods.go:61] "kube-scheduler-newest-cni-610630" [2f8744a9-54dc-4b5c-92f6-b0d3b9b0de7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:36:16.324400  534894 system_pods.go:61] "metrics-server-f79f97bbb-kcc5g" [6593df15-330d-4389-b878-45d396d718b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:16.324408  534894 system_pods.go:61] "storage-provisioner" [3cc7604e-fcbb-48e5-8445-82d5150b759f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:36:16.324413  534894 system_pods.go:74] duration metric: took 9.892595ms to wait for pod list to return data ...
	I0127 12:36:16.324424  534894 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:36:16.327339  534894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:36:16.327364  534894 node_conditions.go:123] node cpu capacity is 2
	I0127 12:36:16.327385  534894 node_conditions.go:105] duration metric: took 2.956884ms to run NodePressure ...
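
Once the bridge CNI config is in place, the runner waits for kube-system pods to appear and checks the node's ephemeral-storage and CPU capacity. The pod listing boils down to a single client-go call; a sketch is below (the kubeconfig is read from the KUBECONFIG environment variable here, whereas the runner uses its own profile's kubeconfig path):

	// pods_wait.go - sketch of the "waiting for kube-system pods to appear" step.
	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("  %s\t%s\n", p.Name, p.Status.Phase)
		}
	}
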
	I0127 12:36:16.327404  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:16.991253  534894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:36:17.011999  534894 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:17.012027  534894 kubeadm.go:597] duration metric: took 8.527837095s to restartPrimaryControlPlane
	I0127 12:36:17.012040  534894 kubeadm.go:394] duration metric: took 8.608822701s to StartCluster
	I0127 12:36:17.012072  534894 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:17.012204  534894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:17.014682  534894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:17.015030  534894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:17.015158  534894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:17.015477  534894 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-610630"
	I0127 12:36:17.015505  534894 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-610630"
	I0127 12:36:17.015320  534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:17.015542  534894 addons.go:69] Setting metrics-server=true in profile "newest-cni-610630"
	I0127 12:36:17.015555  534894 addons.go:238] Setting addon metrics-server=true in "newest-cni-610630"
	W0127 12:36:17.015562  534894 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:17.015556  534894 addons.go:69] Setting default-storageclass=true in profile "newest-cni-610630"
	I0127 12:36:17.015582  534894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-610630"
	I0127 12:36:17.015588  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.015521  534894 addons.go:69] Setting dashboard=true in profile "newest-cni-610630"
	I0127 12:36:17.015608  534894 addons.go:238] Setting addon dashboard=true in "newest-cni-610630"
	W0127 12:36:17.015617  534894 addons.go:247] addon dashboard should already be in state true
	I0127 12:36:17.015643  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.016040  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.016039  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.016050  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	W0127 12:36:17.015533  534894 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:17.016079  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.016082  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.016083  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.016420  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.016423  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.016450  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.031224  534894 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:17.032914  534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:17.036836  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0127 12:36:17.037340  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.037862  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.037882  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.038318  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.038866  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.038905  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.039846  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0127 12:36:17.040182  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.040873  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.040890  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.041292  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.041587  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.045301  534894 addons.go:238] Setting addon default-storageclass=true in "newest-cni-610630"
	W0127 12:36:17.045320  534894 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:17.045352  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.045759  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.045799  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.048089  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0127 12:36:17.048729  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.049195  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.049213  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.049644  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.050180  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.050222  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.050700  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0127 12:36:17.051087  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.051560  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.051581  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.051971  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.052563  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.052600  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.065040  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0127 12:36:17.065537  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.066047  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.066072  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.066400  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.066556  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.068438  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.070276  534894 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:17.071684  534894 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:17.072821  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:17.072844  534894 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:17.072867  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.073985  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0127 12:36:17.074526  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.075082  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.075099  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.075677  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.076310  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.076356  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.078889  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.079441  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.079463  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.079747  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.079954  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.080136  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.080333  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.091530  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I0127 12:36:17.092126  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.092669  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.092694  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.093285  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.093437  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.095189  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I0127 12:36:17.095304  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.095761  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.096341  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.096358  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.096828  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.097030  534894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:17.097195  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.097833  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40641
	I0127 12:36:17.098239  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.098254  534894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:17.098271  534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:17.098299  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.098871  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.098889  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.099255  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.099465  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.099541  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.100856  534894 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:12.874242  532607 pod_ready.go:93] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.874282  532607 pod_ready.go:82] duration metric: took 9.010574512s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.874303  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.882689  532607 pod_ready.go:93] pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.882775  532607 pod_ready.go:82] duration metric: took 8.462495ms for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.882801  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.888659  532607 pod_ready.go:93] pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.888693  532607 pod_ready.go:82] duration metric: took 5.874272ms for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.888707  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.894080  532607 pod_ready.go:93] pod "kube-proxy-smp6l" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.894141  532607 pod_ready.go:82] duration metric: took 5.425838ms for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.894163  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.900793  532607 pod_ready.go:93] pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.900849  532607 pod_ready.go:82] duration metric: took 6.668808ms for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.900869  532607 pod_ready.go:39] duration metric: took 9.044300135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
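
The interleaved lines from process 532607 belong to the embed-certs cluster, which waits for each control-plane pod to report Ready before checking the apiserver. A pod counts as Ready when its PodReady condition is True, which is the check behind the `has status "Ready":"True"` lines above; a small sketch:

	// pod_ready.go - sketch of the per-pod Ready check used in the wait above.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		}}}
		fmt.Println("ready:", isPodReady(pod)) // ready: true
	}
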
	I0127 12:36:12.900904  532607 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:12.900995  532607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:12.922995  532607 api_server.go:72] duration metric: took 9.343524429s to wait for apiserver process to appear ...
	I0127 12:36:12.923066  532607 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:12.923097  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:36:12.930234  532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 200:
	ok
	I0127 12:36:12.931482  532607 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:12.931504  532607 api_server.go:131] duration metric: took 8.421115ms to wait for apiserver health ...
	I0127 12:36:12.931513  532607 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:13.073659  532607 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:13.073701  532607 system_pods.go:61] "coredns-668d6bf9bc-46nfk" [ca146154-7693-43e5-ae2a-f0c3148327b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:13.073712  532607 system_pods.go:61] "coredns-668d6bf9bc-9p64b" [4d44d79e-ea3d-4085-9fb2-356746e71e9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:13.073722  532607 system_pods.go:61] "etcd-embed-certs-346100" [cb00782a-b078-43ee-aa3f-4806aa7629d6] Running
	I0127 12:36:13.073729  532607 system_pods.go:61] "kube-apiserver-embed-certs-346100" [7b0a8d77-4737-4bde-8e2a-2462c524f9a2] Running
	I0127 12:36:13.073735  532607 system_pods.go:61] "kube-controller-manager-embed-certs-346100" [196254b2-812b-43a4-ae10-d55a11957faf] Running
	I0127 12:36:13.073741  532607 system_pods.go:61] "kube-proxy-smp6l" [886c9cd4-795b-4e33-a16e-e12302c37665] Running
	I0127 12:36:13.073746  532607 system_pods.go:61] "kube-scheduler-embed-certs-346100" [90cbc1fe-52a3-45d8-a8e9-edc60f5c4829] Running
	I0127 12:36:13.073754  532607 system_pods.go:61] "metrics-server-f79f97bbb-w8fsn" [3a78ab43-37b0-4dc0-89a9-59a558ef997c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:13.073811  532607 system_pods.go:61] "storage-provisioner" [0d021617-8412-4f33-ba4f-2b3b458721ff] Running
	I0127 12:36:13.073828  532607 system_pods.go:74] duration metric: took 142.306493ms to wait for pod list to return data ...
	I0127 12:36:13.073848  532607 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:13.273298  532607 default_sa.go:45] found service account: "default"
	I0127 12:36:13.273415  532607 default_sa.go:55] duration metric: took 199.555226ms for default service account to be created ...
	I0127 12:36:13.273446  532607 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:36:13.477525  532607 system_pods.go:87] 9 kube-system pods found
	I0127 12:36:17.101529  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.101719  534894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:17.101731  534894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:17.101745  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.102276  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:17.102295  534894 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:17.102329  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.102718  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.103291  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.103308  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.103462  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.103607  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.103729  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.103834  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.106885  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.107336  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.107361  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.107579  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.107585  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.107768  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.107957  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.108065  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.108184  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.108305  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.108457  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.108478  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.108587  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.108674  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.319272  534894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:17.355389  534894 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:17.355483  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:17.383883  534894 api_server.go:72] duration metric: took 368.528555ms to wait for apiserver process to appear ...
	I0127 12:36:17.383915  534894 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:17.383940  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:17.392047  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0127 12:36:17.393460  534894 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:17.393491  534894 api_server.go:131] duration metric: took 9.56764ms to wait for apiserver health ...
	I0127 12:36:17.393503  534894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:17.419483  534894 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:17.419523  534894 system_pods.go:61] "coredns-668d6bf9bc-n6hwn" [24d3582e-97d0-4bb8-b12a-6f69ecd72309] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:17.419533  534894 system_pods.go:61] "coredns-668d6bf9bc-vg4bb" [3d20d4a5-8ddf-4166-af63-47beab76d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:17.419543  534894 system_pods.go:61] "etcd-newest-cni-610630" [0a812f8b-1e38-49a2-be17-9db3fb1979db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:36:17.419550  534894 system_pods.go:61] "kube-apiserver-newest-cni-610630" [292fb9b9-ccfb-49b5-892d-078e5897981d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:36:17.419559  534894 system_pods.go:61] "kube-controller-manager-newest-cni-610630" [36d1c2e6-42ce-4d4a-8bba-ff7beb6551ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:36:17.419565  534894 system_pods.go:61] "kube-proxy-8szpt" [11487c0b-6afa-4c83-9eb8-6a9f609f7b58] Running
	I0127 12:36:17.419574  534894 system_pods.go:61] "kube-scheduler-newest-cni-610630" [2f8744a9-54dc-4b5c-92f6-b0d3b9b0de7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:36:17.419582  534894 system_pods.go:61] "metrics-server-f79f97bbb-kcc5g" [6593df15-330d-4389-b878-45d396d718b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:17.419591  534894 system_pods.go:61] "storage-provisioner" [3cc7604e-fcbb-48e5-8445-82d5150b759f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:36:17.419601  534894 system_pods.go:74] duration metric: took 26.090469ms to wait for pod list to return data ...
	I0127 12:36:17.419614  534894 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:17.422917  534894 default_sa.go:45] found service account: "default"
	I0127 12:36:17.422941  534894 default_sa.go:55] duration metric: took 3.317044ms for default service account to be created ...
	I0127 12:36:17.422956  534894 kubeadm.go:582] duration metric: took 407.606907ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 12:36:17.422975  534894 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:36:17.429059  534894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:36:17.429091  534894 node_conditions.go:123] node cpu capacity is 2
	I0127 12:36:17.429116  534894 node_conditions.go:105] duration metric: took 6.133766ms to run NodePressure ...
	I0127 12:36:17.429138  534894 start.go:241] waiting for startup goroutines ...
	I0127 12:36:17.493751  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:17.493777  534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:17.496271  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:17.540289  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:17.540321  534894 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:17.595530  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:17.595565  534894 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:17.609027  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:17.609055  534894 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:17.726024  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:17.764459  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:17.764492  534894 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:17.764569  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:17.852391  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:17.852429  534894 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:17.964392  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:17.964417  534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:18.185418  532844 kubeadm.go:310] [api-check] The API server is healthy after 6.002059282s
	I0127 12:36:18.204454  532844 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:36:18.218201  532844 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:36:18.245054  532844 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:36:18.245331  532844 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-887672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:36:18.257186  532844 kubeadm.go:310] [bootstrap-token] Using token: 5yhtlj.kyb5uzy41lrz34us
	I0127 12:36:18.258581  532844 out.go:235]   - Configuring RBAC rules ...
	I0127 12:36:18.258747  532844 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:36:18.265191  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:36:18.272296  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:36:18.285037  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:36:18.285204  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:36:18.285313  532844 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:36:18.593364  532844 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:36:19.042942  532844 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:36:19.593432  532844 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:36:19.594797  532844 kubeadm.go:310] 
	I0127 12:36:19.594875  532844 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:36:19.594888  532844 kubeadm.go:310] 
	I0127 12:36:19.594970  532844 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:36:19.594981  532844 kubeadm.go:310] 
	I0127 12:36:19.595011  532844 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:36:19.595081  532844 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:36:19.595152  532844 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:36:19.595166  532844 kubeadm.go:310] 
	I0127 12:36:19.595239  532844 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:36:19.595246  532844 kubeadm.go:310] 
	I0127 12:36:19.595301  532844 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:36:19.595308  532844 kubeadm.go:310] 
	I0127 12:36:19.595371  532844 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:36:19.595464  532844 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:36:19.595545  532844 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:36:19.595554  532844 kubeadm.go:310] 
	I0127 12:36:19.595667  532844 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:36:19.595757  532844 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:36:19.595767  532844 kubeadm.go:310] 
	I0127 12:36:19.595869  532844 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
	I0127 12:36:19.595998  532844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
	I0127 12:36:19.596017  532844 kubeadm.go:310] 	--control-plane 
	I0127 12:36:19.596021  532844 kubeadm.go:310] 
	I0127 12:36:19.596121  532844 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:36:19.596137  532844 kubeadm.go:310] 
	I0127 12:36:19.596223  532844 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
	I0127 12:36:19.596305  532844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 
	I0127 12:36:19.598645  532844 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:36:19.598687  532844 cni.go:84] Creating CNI manager for ""
	I0127 12:36:19.598696  532844 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:19.600188  532844 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:36:18.113709  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:18.113742  534894 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:18.153599  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:18.153635  534894 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:18.176500  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:18.176539  534894 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:18.216973  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:18.217007  534894 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:18.274511  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:18.274583  534894 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:18.342333  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:18.361302  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:18.361342  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:18.361665  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:18.361699  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:18.361710  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:18.361719  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:18.362117  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:18.362140  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:18.362144  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:18.371041  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:18.371065  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:18.371339  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:18.371377  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.594328  534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868263184s)
	I0127 12:36:19.594692  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.594482  534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.829887156s)
	I0127 12:36:19.594790  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.594804  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.595140  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:19.595208  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.595219  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.595238  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.595247  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.595556  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:19.595579  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.595600  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.595618  534894 addons.go:479] Verifying addon metrics-server=true in "newest-cni-610630"
	I0127 12:36:19.596388  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.596722  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.596754  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.596763  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.596770  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.597063  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:19.597086  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.597098  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:20.095246  534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.752863121s)
	I0127 12:36:20.095306  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:20.095324  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:20.095623  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:20.095685  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:20.095695  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:20.095711  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:20.095721  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:20.096021  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:20.096038  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:20.096055  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:20.097482  534894 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-610630 addons enable metrics-server
	
	I0127 12:36:20.098730  534894 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0127 12:36:20.099860  534894 addons.go:514] duration metric: took 3.084737287s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0127 12:36:20.099913  534894 start.go:246] waiting for cluster config update ...
	I0127 12:36:20.099934  534894 start.go:255] writing updated cluster config ...
	I0127 12:36:20.100260  534894 ssh_runner.go:195] Run: rm -f paused
	I0127 12:36:20.153018  534894 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:36:20.154413  534894 out.go:177] * Done! kubectl is now configured to use "newest-cni-610630" cluster and "default" namespace by default
	I0127 12:36:19.601391  532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:36:19.615483  532844 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:36:19.641045  532844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:36:19.641123  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:19.641161  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-887672 minikube.k8s.io/updated_at=2025_01_27T12_36_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=default-k8s-diff-port-887672 minikube.k8s.io/primary=true
	I0127 12:36:19.655315  532844 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:19.893685  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:20.394472  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:20.893933  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:21.394823  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:21.893992  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:22.393950  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:22.894084  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:23.394506  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:23.893909  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:24.393790  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:24.491305  532844 kubeadm.go:1113] duration metric: took 4.850249048s to wait for elevateKubeSystemPrivileges
	I0127 12:36:24.491356  532844 kubeadm.go:394] duration metric: took 4m37.901720321s to StartCluster
	I0127 12:36:24.491385  532844 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:24.491488  532844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:24.493752  532844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:24.494040  532844 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.130 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:24.494175  532844 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:24.494273  532844 config.go:182] Loaded profile config "default-k8s-diff-port-887672": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:24.494285  532844 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494323  532844 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-887672"
	I0127 12:36:24.494316  532844 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494338  532844 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494372  532844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-887672"
	I0127 12:36:24.494381  532844 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.494394  532844 addons.go:247] addon dashboard should already be in state true
	W0127 12:36:24.494332  532844 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:24.494432  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494463  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494323  532844 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494553  532844 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.494564  532844 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:24.494598  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494863  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494871  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494871  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494905  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.494911  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495037  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.495049  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495123  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495481  532844 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:24.496811  532844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:24.513577  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0127 12:36:24.514115  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.514694  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.514720  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.515161  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.515484  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
	I0127 12:36:24.515836  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0127 12:36:24.515999  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0127 12:36:24.516094  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.516144  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.516192  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.516413  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.516675  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.516695  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.516974  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.516994  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.517001  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.517393  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.517583  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.517647  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.518197  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.518252  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.518469  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.518494  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.518868  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.519422  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.519470  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.521629  532844 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.521653  532844 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:24.521684  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.522040  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.522081  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.534712  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0127 12:36:24.535195  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.536504  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.536527  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.536554  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0127 12:36:24.536902  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.536959  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.537111  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.537597  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.537616  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.537969  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.538145  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.538989  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0127 12:36:24.539580  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.540009  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0127 12:36:24.540196  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.540422  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.540715  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.540879  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.540902  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.540934  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.540948  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.541341  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.541388  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.541685  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.542042  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.542090  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.542251  532844 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:24.542373  532844 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:24.543206  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.543412  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:24.543430  532844 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:24.543460  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.544493  532844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:24.545545  532844 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:24.545643  532844 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:24.545656  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:24.545671  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.546541  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:24.546563  532844 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:24.546584  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.547093  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.547276  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.547478  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.547900  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.548065  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.547944  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.548278  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.549918  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550146  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.550170  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550429  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.550517  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550608  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.550758  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.550914  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.550956  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.550993  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.551165  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.551308  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.551460  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.551595  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.566621  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0127 12:36:24.567007  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.567434  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.567460  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.567879  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.568040  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.569632  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.569844  532844 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:24.569859  532844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:24.569875  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.572937  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.573361  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.573377  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.573577  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.573757  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.573888  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.574044  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.747290  532844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:24.779846  532844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-887672" to be "Ready" ...
	I0127 12:36:24.813551  532844 node_ready.go:49] node "default-k8s-diff-port-887672" has status "Ready":"True"
	I0127 12:36:24.813582  532844 node_ready.go:38] duration metric: took 33.68566ms for node "default-k8s-diff-port-887672" to be "Ready" ...
	I0127 12:36:24.813594  532844 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:24.825398  532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:24.855841  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:24.855869  532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:24.865288  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:24.890399  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:24.907963  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:24.907990  532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:24.923409  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:24.923434  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:24.967186  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:24.967211  532844 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:25.003133  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:25.003167  532844 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:25.031491  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:25.031515  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:25.086171  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:25.086201  532844 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:25.147825  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:25.152298  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:25.152324  532844 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:25.203235  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:25.203264  532844 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:25.242547  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:25.242578  532844 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:25.281622  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:25.281659  532844 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:25.312416  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:25.312444  532844 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:25.365802  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:25.651534  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.651566  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.651590  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.651612  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.651995  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652009  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652020  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652021  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652033  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.652036  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652040  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.652047  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652055  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.652063  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.652511  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652572  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652594  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652580  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652592  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652796  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.667377  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.667403  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.667693  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.667709  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.974214  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.974246  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.974553  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.974574  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.974591  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.974600  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.974992  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.975017  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.975032  532844 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-887672"
	I0127 12:36:26.960702  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:27.097489  532844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.731632212s)
	I0127 12:36:27.097551  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:27.097567  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:27.097886  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:27.097909  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:27.097909  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:27.097917  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:27.097935  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:27.098221  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:27.098291  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:27.099837  532844 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-887672 addons enable metrics-server
	
	I0127 12:36:27.101354  532844 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 12:36:27.102395  532844 addons.go:514] duration metric: took 2.608238219s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 12:36:29.331790  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:31.334726  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:33.834237  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:34.374688  532844 pod_ready.go:93] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.374713  532844 pod_ready.go:82] duration metric: took 9.549290033s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.374725  532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.399299  532844 pod_ready.go:93] pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.399323  532844 pod_ready.go:82] duration metric: took 24.589743ms for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.399332  532844 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.421329  532844 pod_ready.go:93] pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.421359  532844 pod_ready.go:82] duration metric: took 22.019877ms for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.421399  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.427922  532844 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.427946  532844 pod_ready.go:82] duration metric: took 6.537775ms for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.427957  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.447675  532844 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.447701  532844 pod_ready.go:82] duration metric: took 19.736139ms for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.447713  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.729783  532844 pod_ready.go:93] pod "kube-proxy-xl46c" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.729827  532844 pod_ready.go:82] duration metric: took 282.092476ms for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.729841  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:35.128755  532844 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:35.128781  532844 pod_ready.go:82] duration metric: took 398.931642ms for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:35.128790  532844 pod_ready.go:39] duration metric: took 10.315186396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:35.128806  532844 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:35.128870  532844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:35.148548  532844 api_server.go:72] duration metric: took 10.654456335s to wait for apiserver process to appear ...
	I0127 12:36:35.148574  532844 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:35.148597  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:36:35.156175  532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 200:
	ok
	I0127 12:36:35.157842  532844 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:35.157866  532844 api_server.go:131] duration metric: took 9.283401ms to wait for apiserver health ...
	I0127 12:36:35.157875  532844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:35.339567  532844 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:35.339606  532844 system_pods.go:61] "coredns-668d6bf9bc-jc882" [cc7b1851-f0b2-406d-b972-155b02dcefc6] Running
	I0127 12:36:35.339614  532844 system_pods.go:61] "coredns-668d6bf9bc-s6rln" [553e1b5c-1bb3-48f4-bf25-6837dae6b627] Running
	I0127 12:36:35.339620  532844 system_pods.go:61] "etcd-default-k8s-diff-port-887672" [cfe71b01-c4c5-4772-904f-0f22ebdc9481] Running
	I0127 12:36:35.339625  532844 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-887672" [09952f8b-2235-45c2-aac8-328369a341dd] Running
	I0127 12:36:35.339631  532844 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-887672" [6aee732f-0e4f-4362-b2d5-38e533a146c4] Running
	I0127 12:36:35.339636  532844 system_pods.go:61] "kube-proxy-xl46c" [c2ddd14b-3d9e-4985-935e-5f64d188e68e] Running
	I0127 12:36:35.339641  532844 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-887672" [7a436b79-cc6a-4311-9cb6-24537ed6aed0] Running
	I0127 12:36:35.339652  532844 system_pods.go:61] "metrics-server-f79f97bbb-twqz4" [107a2af6-937d-4c95-a8dd-f47f59dd3afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:35.339659  532844 system_pods.go:61] "storage-provisioner" [ebd493f5-ab93-4083-8174-aceb44741e99] Running
	I0127 12:36:35.339675  532844 system_pods.go:74] duration metric: took 181.791009ms to wait for pod list to return data ...
	I0127 12:36:35.339689  532844 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:35.528977  532844 default_sa.go:45] found service account: "default"
	I0127 12:36:35.529018  532844 default_sa.go:55] duration metric: took 189.31757ms for default service account to be created ...
	I0127 12:36:35.529033  532844 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:36:35.732388  532844 system_pods.go:87] 9 kube-system pods found
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	3ceaaa73498cf       523cad1a4df73       34 seconds ago      Exited              dashboard-metrics-scraper   9                   0573e71b6e2a1       dashboard-metrics-scraper-86c6bf9756-kd8j9
	4b65326b3a3c3       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   ca768cf27c29d       kubernetes-dashboard-7779f9b69b-4vdvf
	dc2d31b650f7f       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   4d79d92112052       storage-provisioner
	e204bce6ab533       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   ebc54e95eb844       coredns-668d6bf9bc-wwb9p
	6a071a9d5905b       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   28aa601e02f72       coredns-668d6bf9bc-v9stn
	22d83b17aba0d       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   7286d10309151       kube-proxy-bbnm2
	b3e3a512c59dc       a9e7e6b294baf       21 minutes ago      Running             etcd                        2                   1375b8aa414ea       etcd-no-preload-215237
	da65aa22e920d       2b0d6572d062c       21 minutes ago      Running             kube-scheduler              2                   6eec9ecbf79af       kube-scheduler-no-preload-215237
	41ac70a4bacec       019ee182b58e2       21 minutes ago      Running             kube-controller-manager     2                   d6b3b59aaa35c       kube-controller-manager-no-preload-215237
	95aa57ca824e9       95c0bda56fc4d       21 minutes ago      Running             kube-apiserver              2                   53ffb55d3c5e4       kube-apiserver-no-preload-215237
	
	
	==> containerd <==
	Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.506540428Z" level=info msg="StartContainer for \"ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383\" returns successfully"
	Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.551103428Z" level=info msg="shim disconnected" id=ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383 namespace=k8s.io
	Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.551240535Z" level=warning msg="cleaning up after shim disconnected" id=ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383 namespace=k8s.io
	Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.551361180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.551571153Z" level=error msg="collecting metrics for ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383" error="ttrpc: closed: unknown"
	Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.568298936Z" level=warning msg="cleanup warnings time=\"2025-01-27T12:51:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.785024552Z" level=info msg="RemoveContainer for \"40e4bd940c7e40cf969b1dc3a54c32be8e002e8159e3f01c49725e3b27dc4cac\""
	Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.791916163Z" level=info msg="RemoveContainer for \"40e4bd940c7e40cf969b1dc3a54c32be8e002e8159e3f01c49725e3b27dc4cac\" returns successfully"
	Jan 27 12:51:48 no-preload-215237 containerd[562]: time="2025-01-27T12:51:48.409506279Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:51:48 no-preload-215237 containerd[562]: time="2025-01-27T12:51:48.419140195Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:51:48 no-preload-215237 containerd[562]: time="2025-01-27T12:51:48.421212040Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:51:48 no-preload-215237 containerd[562]: time="2025-01-27T12:51:48.421431332Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.411523061Z" level=info msg="CreateContainer within sandbox \"0573e71b6e2a1421d0e3e5116b4f8b6c50a4b1d8ea3371d33246ede8628de50e\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.433556422Z" level=info msg="CreateContainer within sandbox \"0573e71b6e2a1421d0e3e5116b4f8b6c50a4b1d8ea3371d33246ede8628de50e\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c\""
	Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.434538985Z" level=info msg="StartContainer for \"3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c\""
	Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.501948496Z" level=info msg="StartContainer for \"3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c\" returns successfully"
	Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.543428109Z" level=info msg="shim disconnected" id=3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c namespace=k8s.io
	Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.543542819Z" level=warning msg="cleaning up after shim disconnected" id=3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c namespace=k8s.io
	Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.543620521Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:56:42 no-preload-215237 containerd[562]: time="2025-01-27T12:56:42.486662703Z" level=info msg="RemoveContainer for \"ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383\""
	Jan 27 12:56:42 no-preload-215237 containerd[562]: time="2025-01-27T12:56:42.494590344Z" level=info msg="RemoveContainer for \"ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383\" returns successfully"
	Jan 27 12:57:02 no-preload-215237 containerd[562]: time="2025-01-27T12:57:02.409235664Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:57:02 no-preload-215237 containerd[562]: time="2025-01-27T12:57:02.418944317Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:57:02 no-preload-215237 containerd[562]: time="2025-01-27T12:57:02.420533982Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:57:02 no-preload-215237 containerd[562]: time="2025-01-27T12:57:02.420593937Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [6a071a9d5905bd462eb5828e287847c360395e9bdf44b10604521331ed76dc38] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e204bce6ab533b2ab5f3991efb9bf4c39b985dfdfcda79400757ae9cc2b16401] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-215237
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-215237
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=no-preload-215237
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_35_36_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:35:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-215237
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:57:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:55:29 +0000   Mon, 27 Jan 2025 12:35:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:55:29 +0000   Mon, 27 Jan 2025 12:35:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:55:29 +0000   Mon, 27 Jan 2025 12:35:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:55:29 +0000   Mon, 27 Jan 2025 12:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.127
	  Hostname:    no-preload-215237
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9ae0e5191349457197e5e70ea74d2584
	  System UUID:                9ae0e519-1349-4571-97e5-e70ea74d2584
	  Boot ID:                    87718bc9-62ae-4833-b9af-6d0031a85e3e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-v9stn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-wwb9p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-215237                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-215237              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-215237     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-bbnm2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-215237              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-lqck5                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-kd8j9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-4vdvf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-215237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-215237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-215237 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-215237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-215237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-215237 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-215237 event: Registered Node no-preload-215237 in Controller
	
	
	==> dmesg <==
	[  +0.038094] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.819415] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.019120] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.558800] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan27 12:31] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +0.058765] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.046408] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.155095] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
	[  +0.139464] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
	[  +0.280508] systemd-fstab-generator[553]: Ignoring "noauto" option for root device
	[  +1.713990] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +1.806706] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +0.853658] kauditd_printk_skb: 225 callbacks suppressed
	[  +5.037897] kauditd_printk_skb: 50 callbacks suppressed
	[ +11.338835] kauditd_printk_skb: 71 callbacks suppressed
	[Jan27 12:35] systemd-fstab-generator[3024]: Ignoring "noauto" option for root device
	[  +6.061721] systemd-fstab-generator[3397]: Ignoring "noauto" option for root device
	[  +0.105541] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.139082] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.304385] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +5.315069] kauditd_printk_skb: 112 callbacks suppressed
	[  +7.866581] kauditd_printk_skb: 1 callbacks suppressed
	[Jan27 12:36] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [b3e3a512c59dcf9411744c4bac1c26316107acd555b51dc8f450d5bb4237410d] <==
	{"level":"info","ts":"2025-01-27T12:35:31.070363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:35:31.071217Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:35:31.076095Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:35:31.084228Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.127:2379"}
	{"level":"info","ts":"2025-01-27T12:35:31.088943Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:35:31.080713Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T12:35:31.089415Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T12:35:31.083432Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2b0928dff5fc0b2","local-member-id":"aed9602068d4a4e0","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:35:31.089992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:35:31.092430Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:35:31.097719Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T12:35:47.689584Z","caller":"traceutil/trace.go:171","msg":"trace[1288856585] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"123.552333ms","start":"2025-01-27T12:35:47.565958Z","end":"2025-01-27T12:35:47.689510Z","steps":["trace[1288856585] 'process raft request'  (duration: 123.456666ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:35:49.305994Z","caller":"traceutil/trace.go:171","msg":"trace[1251088634] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"100.461726ms","start":"2025-01-27T12:35:49.205505Z","end":"2025-01-27T12:35:49.305967Z","steps":["trace[1251088634] 'process raft request'  (duration: 99.751683ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:35:53.452191Z","caller":"traceutil/trace.go:171","msg":"trace[1851918262] transaction","detail":"{read_only:false; response_revision:536; number_of_response:1; }","duration":"114.283914ms","start":"2025-01-27T12:35:53.337879Z","end":"2025-01-27T12:35:53.452163Z","steps":["trace[1851918262] 'process raft request'  (duration: 113.060063ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:36:32.874740Z","caller":"traceutil/trace.go:171","msg":"trace[557556585] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"136.24693ms","start":"2025-01-27T12:36:32.738464Z","end":"2025-01-27T12:36:32.874711Z","steps":["trace[557556585] 'process raft request'  (duration: 136.053337ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:36:32.879496Z","caller":"traceutil/trace.go:171","msg":"trace[1453627606] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"132.561219ms","start":"2025-01-27T12:36:32.746916Z","end":"2025-01-27T12:36:32.879478Z","steps":["trace[1453627606] 'process raft request'  (duration: 132.021847ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:45:31.447034Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":869}
	{"level":"info","ts":"2025-01-27T12:45:31.488475Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":869,"took":"40.382605ms","hash":2608631459,"current-db-size-bytes":2920448,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2920448,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-27T12:45:31.488704Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2608631459,"revision":869,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T12:50:31.454455Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1120}
	{"level":"info","ts":"2025-01-27T12:50:31.459411Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1120,"took":"4.280134ms","hash":1951365307,"current-db-size-bytes":2920448,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1728512,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T12:50:31.459474Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1951365307,"revision":1120,"compact-revision":869}
	{"level":"info","ts":"2025-01-27T12:55:31.464047Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1372}
	{"level":"info","ts":"2025-01-27T12:55:31.469871Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1372,"took":"4.593062ms","hash":4111361700,"current-db-size-bytes":2920448,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1843200,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:55:31.469971Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4111361700,"revision":1372,"compact-revision":1120}
	
	
	==> kernel <==
	 12:57:15 up 26 min,  0 users,  load average: 0.09, 0.11, 0.09
	Linux no-preload-215237 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [95aa57ca824e97918bf0d2b243865c20f33b1c15de12407fc1b20ba49b450296] <==
	I0127 12:53:33.932968       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:53:33.933021       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:55:32.931531       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:55:32.931762       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:55:33.933996       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:55:33.934154       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:55:33.934040       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:55:33.934294       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 12:55:33.935596       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:55:33.935632       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:56:33.936704       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:56:33.937084       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:56:33.937253       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:56:33.937323       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 12:56:33.938298       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:56:33.938502       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [41ac70a4bacec6104f094ac80a9205904f6f390bee466b2a7f3baa56d349f7ff] <==
	I0127 12:52:09.747484       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:52:14.422176       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="136.591µs"
	E0127 12:52:39.699955       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:52:39.754247       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:53:09.706535       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:53:09.761805       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:53:39.713157       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:53:39.771001       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:54:09.720146       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:54:09.778901       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:54:39.727498       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:54:39.786482       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:55:09.734966       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:09.795665       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:55:29.263823       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-215237"
	E0127 12:55:39.741376       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:39.802752       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:09.748042       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:09.809143       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:39.755134       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:39.816685       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:56:42.504601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="1.171328ms"
	I0127 12:56:48.049166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="85.901µs"
	E0127 12:57:09.761583       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:57:09.823165       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [22d83b17aba0d308213867c1019db87a7dcd2fb74c0992663a062867e498094b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:35:40.803179       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:35:40.815043       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.127"]
	E0127 12:35:40.815116       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:35:40.912808       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:35:40.912846       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:35:40.912867       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:35:40.916261       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:35:40.916860       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:35:40.916894       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:35:40.918730       1 config.go:199] "Starting service config controller"
	I0127 12:35:40.918778       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:35:40.918801       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:35:40.918819       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:35:40.923786       1 config.go:329] "Starting node config controller"
	I0127 12:35:40.923799       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:35:41.019353       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:35:41.019396       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:35:41.026373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [da65aa22e920d0c8384e67aacfe137551310ab4661c3159a4babe77fa7cdacf3] <==
	W0127 12:35:33.849575       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:35:33.849640       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:33.866527       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 12:35:33.866834       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:33.889220       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 12:35:33.889531       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:33.899656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 12:35:33.900089       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:33.910481       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 12:35:33.910523       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:33.924056       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:35:33.924147       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 12:35:34.082107       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:35:34.082171       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:34.086192       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 12:35:34.086245       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:34.128167       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:35:34.128677       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:34.177503       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:35:34.177572       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:34.272348       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 12:35:34.272421       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:34.289501       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 12:35:34.289553       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:35:35.921663       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:56:12 no-preload-215237 kubelet[3404]: E0127 12:56:12.408710    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
	Jan 27 12:56:19 no-preload-215237 kubelet[3404]: E0127 12:56:19.411446    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-lqck5" podUID="3447c2da-cbb0-412c-a8d9-2be32c8e6dad"
	Jan 27 12:56:26 no-preload-215237 kubelet[3404]: I0127 12:56:26.407996    3404 scope.go:117] "RemoveContainer" containerID="ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383"
	Jan 27 12:56:26 no-preload-215237 kubelet[3404]: E0127 12:56:26.408635    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
	Jan 27 12:56:34 no-preload-215237 kubelet[3404]: E0127 12:56:34.409531    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-lqck5" podUID="3447c2da-cbb0-412c-a8d9-2be32c8e6dad"
	Jan 27 12:56:35 no-preload-215237 kubelet[3404]: E0127 12:56:35.427562    3404 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:56:35 no-preload-215237 kubelet[3404]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:56:35 no-preload-215237 kubelet[3404]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:56:35 no-preload-215237 kubelet[3404]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:56:35 no-preload-215237 kubelet[3404]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:56:41 no-preload-215237 kubelet[3404]: I0127 12:56:41.408787    3404 scope.go:117] "RemoveContainer" containerID="ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383"
	Jan 27 12:56:42 no-preload-215237 kubelet[3404]: I0127 12:56:42.483701    3404 scope.go:117] "RemoveContainer" containerID="ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383"
	Jan 27 12:56:42 no-preload-215237 kubelet[3404]: I0127 12:56:42.484368    3404 scope.go:117] "RemoveContainer" containerID="3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c"
	Jan 27 12:56:42 no-preload-215237 kubelet[3404]: E0127 12:56:42.484596    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
	Jan 27 12:56:48 no-preload-215237 kubelet[3404]: I0127 12:56:48.031649    3404 scope.go:117] "RemoveContainer" containerID="3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c"
	Jan 27 12:56:48 no-preload-215237 kubelet[3404]: E0127 12:56:48.031952    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
	Jan 27 12:56:48 no-preload-215237 kubelet[3404]: E0127 12:56:48.409021    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-lqck5" podUID="3447c2da-cbb0-412c-a8d9-2be32c8e6dad"
	Jan 27 12:57:00 no-preload-215237 kubelet[3404]: I0127 12:57:00.408211    3404 scope.go:117] "RemoveContainer" containerID="3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c"
	Jan 27 12:57:00 no-preload-215237 kubelet[3404]: E0127 12:57:00.408580    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
	Jan 27 12:57:02 no-preload-215237 kubelet[3404]: E0127 12:57:02.420922    3404 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:57:02 no-preload-215237 kubelet[3404]: E0127 12:57:02.421390    3404 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:57:02 no-preload-215237 kubelet[3404]: E0127 12:57:02.421771    3404 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nrcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-lqck5_kube-system(3447c2da-cbb0-412c-a8d9-2be32c8e6dad): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 12:57:02 no-preload-215237 kubelet[3404]: E0127 12:57:02.423194    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-lqck5" podUID="3447c2da-cbb0-412c-a8d9-2be32c8e6dad"
	Jan 27 12:57:14 no-preload-215237 kubelet[3404]: I0127 12:57:14.407961    3404 scope.go:117] "RemoveContainer" containerID="3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c"
	Jan 27 12:57:14 no-preload-215237 kubelet[3404]: E0127 12:57:14.408592    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
	
	
	==> kubernetes-dashboard [4b65326b3a3c311cd62ce540884e41956b6fbed40d4755dbbf0bff3c4de481fd] <==
	2025/01/27 12:44:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:45:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:45:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:46:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:46:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:47:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:47:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [dc2d31b650f7fa67ecb57ffc495e5d3fe523cd58f39ed357acc14aed652476d0] <==
	I0127 12:35:43.036174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 12:35:43.097982       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 12:35:43.098250       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 12:35:43.129850       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 12:35:43.130155       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-215237_073fd1a8-0a3b-49a1-b9f9-7c7d9f226e85!
	I0127 12:35:43.131603       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58db0c48-2dc1-4940-89af-b87e3848859b", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-215237_073fd1a8-0a3b-49a1-b9f9-7c7d9f226e85 became leader
	I0127 12:35:43.231514       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-215237_073fd1a8-0a3b-49a1-b9f9-7c7d9f226e85!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-215237 -n no-preload-215237
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-215237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-lqck5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-215237 describe pod metrics-server-f79f97bbb-lqck5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-215237 describe pod metrics-server-f79f97bbb-lqck5: exit status 1 (61.464991ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-lqck5" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-215237 describe pod metrics-server-f79f97bbb-lqck5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1596.02s)
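For anyone replaying this post-mortem by hand: the kubelet errors above show metrics-server trying to pull fake.domain/registry.k8s.io/echoserver:1.4, and the ErrImagePull / "no such host" loop is the expected outcome of the fake.domain registry override visible in the start output ("Using image fake.domain/registry.k8s.io/echoserver:1.4"), since that host does not resolve. A minimal sketch of the same checks, reusing the commands the helpers already ran (the pod name is specific to this run and had already been deleted, hence the NotFound above; the deployment name is assumed from the pod name):

	# list pods that are not Running, as helpers_test.go does
	kubectl --context no-preload-215237 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# describe the non-running pod reported by the previous command
	kubectl --context no-preload-215237 describe pod metrics-server-f79f97bbb-lqck5
	# confirm which image the metrics-server deployment is pointed at
	kubectl --context no-preload-215237 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'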

x
+
TestStartStop/group/embed-certs/serial/SecondStart (1619.09s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-346100 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 12:31:08.436436  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:18.024767  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:19.884158  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:19.890556  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:19.902003  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:19.923457  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:19.964805  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:20.046232  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:20.207769  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:20.529470  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:21.171458  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:22.453558  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-346100 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m57.102714662s)

-- stdout --
	* [embed-certs-346100] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-346100" primary control-plane node in "embed-certs-346100" cluster
	* Restarting existing kvm2 VM for "embed-certs-346100" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-346100 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0127 12:31:02.351646  532607 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:31:02.351771  532607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:02.351782  532607 out.go:358] Setting ErrFile to fd 2...
	I0127 12:31:02.351790  532607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:02.351978  532607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 12:31:02.352513  532607 out.go:352] Setting JSON to false
	I0127 12:31:02.353504  532607 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11605,"bootTime":1737969457,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:31:02.353602  532607 start.go:139] virtualization: kvm guest
	I0127 12:31:02.355881  532607 out.go:177] * [embed-certs-346100] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:31:02.357280  532607 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:31:02.357279  532607 notify.go:220] Checking for updates...
	I0127 12:31:02.359614  532607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:31:02.360705  532607 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:31:02.361930  532607 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 12:31:02.363158  532607 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:31:02.364229  532607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:31:02.365633  532607 config.go:182] Loaded profile config "embed-certs-346100": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:31:02.365989  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:31:02.366051  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:02.380712  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0127 12:31:02.381086  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:02.381659  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:31:02.381678  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:02.381996  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:02.382266  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:31:02.382530  532607 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:31:02.382797  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:31:02.382831  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:02.397312  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	I0127 12:31:02.397675  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:02.398134  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:31:02.398152  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:02.398454  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:02.398652  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:31:02.433187  532607 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:31:02.434336  532607 start.go:297] selected driver: kvm2
	I0127 12:31:02.434352  532607 start.go:901] validating driver "kvm2" against &{Name:embed-certs-346100 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-346100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:02.434464  532607 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:31:02.435096  532607 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:02.435210  532607 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:31:02.449794  532607 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:31:02.450181  532607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:31:02.450217  532607 cni.go:84] Creating CNI manager for ""
	I0127 12:31:02.450250  532607 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:31:02.450283  532607 start.go:340] cluster config:
	{Name:embed-certs-346100 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-346100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:02.450405  532607 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:02.451869  532607 out.go:177] * Starting "embed-certs-346100" primary control-plane node in "embed-certs-346100" cluster
	I0127 12:31:02.452914  532607 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:31:02.452946  532607 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 12:31:02.452957  532607 cache.go:56] Caching tarball of preloaded images
	I0127 12:31:02.453043  532607 preload.go:172] Found /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:31:02.453057  532607 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 12:31:02.453140  532607 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100/config.json ...
	I0127 12:31:02.453312  532607 start.go:360] acquireMachinesLock for embed-certs-346100: {Name:mk818835aef0de701295cc2c98fea95e1be33202 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:31:02.453384  532607 start.go:364] duration metric: took 53.758µs to acquireMachinesLock for "embed-certs-346100"
	I0127 12:31:02.453401  532607 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:31:02.453408  532607 fix.go:54] fixHost starting: 
	I0127 12:31:02.453644  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:31:02.453679  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:02.467150  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I0127 12:31:02.467635  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:02.468152  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:31:02.468176  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:02.468478  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:02.468672  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:31:02.468812  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:31:02.470286  532607 fix.go:112] recreateIfNeeded on embed-certs-346100: state=Stopped err=<nil>
	I0127 12:31:02.470322  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	W0127 12:31:02.470481  532607 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:31:02.472003  532607 out.go:177] * Restarting existing kvm2 VM for "embed-certs-346100" ...
	I0127 12:31:02.472895  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Start
	I0127 12:31:02.473052  532607 main.go:141] libmachine: (embed-certs-346100) starting domain...
	I0127 12:31:02.473072  532607 main.go:141] libmachine: (embed-certs-346100) ensuring networks are active...
	I0127 12:31:02.473944  532607 main.go:141] libmachine: (embed-certs-346100) Ensuring network default is active
	I0127 12:31:02.474270  532607 main.go:141] libmachine: (embed-certs-346100) Ensuring network mk-embed-certs-346100 is active
	I0127 12:31:02.474919  532607 main.go:141] libmachine: (embed-certs-346100) getting domain XML...
	I0127 12:31:02.476760  532607 main.go:141] libmachine: (embed-certs-346100) creating domain...
	I0127 12:31:03.756422  532607 main.go:141] libmachine: (embed-certs-346100) waiting for IP...
	I0127 12:31:03.757267  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:03.757737  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:03.757828  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:03.757742  532643 retry.go:31] will retry after 279.824008ms: waiting for domain to come up
	I0127 12:31:04.039440  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:04.040070  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:04.040119  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:04.040025  532643 retry.go:31] will retry after 261.651185ms: waiting for domain to come up
	I0127 12:31:04.303651  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:04.304287  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:04.304318  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:04.304240  532643 retry.go:31] will retry after 427.586533ms: waiting for domain to come up
	I0127 12:31:04.734021  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:04.734543  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:04.734571  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:04.734513  532643 retry.go:31] will retry after 540.47106ms: waiting for domain to come up
	I0127 12:31:05.276289  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:05.276818  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:05.276849  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:05.276790  532643 retry.go:31] will retry after 760.788836ms: waiting for domain to come up
	I0127 12:31:06.038907  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:06.039451  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:06.039494  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:06.039399  532643 retry.go:31] will retry after 933.527271ms: waiting for domain to come up
	I0127 12:31:06.974718  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:06.975223  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:06.975282  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:06.975188  532643 retry.go:31] will retry after 1.153949364s: waiting for domain to come up
	I0127 12:31:08.131021  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:08.131521  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:08.131553  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:08.131483  532643 retry.go:31] will retry after 1.116013707s: waiting for domain to come up
	I0127 12:31:09.248718  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:09.249274  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:09.249300  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:09.249225  532643 retry.go:31] will retry after 1.245600676s: waiting for domain to come up
	I0127 12:31:10.496782  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:10.497325  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:10.497352  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:10.497286  532643 retry.go:31] will retry after 2.294647723s: waiting for domain to come up
	I0127 12:31:12.793702  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:12.794240  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:12.794273  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:12.794182  532643 retry.go:31] will retry after 1.779650193s: waiting for domain to come up
	I0127 12:31:14.574915  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:14.575504  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:14.575524  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:14.575471  532643 retry.go:31] will retry after 2.405543503s: waiting for domain to come up
	I0127 12:31:16.982253  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:16.982754  532607 main.go:141] libmachine: (embed-certs-346100) DBG | unable to find current IP address of domain embed-certs-346100 in network mk-embed-certs-346100
	I0127 12:31:16.982780  532607 main.go:141] libmachine: (embed-certs-346100) DBG | I0127 12:31:16.982713  532643 retry.go:31] will retry after 4.439241304s: waiting for domain to come up
	I0127 12:31:21.426652  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.427220  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has current primary IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.427247  532607 main.go:141] libmachine: (embed-certs-346100) found domain IP: 192.168.50.206
	I0127 12:31:21.427260  532607 main.go:141] libmachine: (embed-certs-346100) reserving static IP address...
	I0127 12:31:21.427717  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "embed-certs-346100", mac: "52:54:00:8f:cd:c0", ip: "192.168.50.206"} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:21.427753  532607 main.go:141] libmachine: (embed-certs-346100) DBG | skip adding static IP to network mk-embed-certs-346100 - found existing host DHCP lease matching {name: "embed-certs-346100", mac: "52:54:00:8f:cd:c0", ip: "192.168.50.206"}
	I0127 12:31:21.427779  532607 main.go:141] libmachine: (embed-certs-346100) reserved static IP address 192.168.50.206 for domain embed-certs-346100
	I0127 12:31:21.427801  532607 main.go:141] libmachine: (embed-certs-346100) waiting for SSH...
	I0127 12:31:21.427815  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Getting to WaitForSSH function...
	I0127 12:31:21.429935  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.430289  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:21.430317  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.430442  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Using SSH client type: external
	I0127 12:31:21.430464  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa (-rw-------)
	I0127 12:31:21.430501  532607 main.go:141] libmachine: (embed-certs-346100) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:31:21.430511  532607 main.go:141] libmachine: (embed-certs-346100) DBG | About to run SSH command:
	I0127 12:31:21.430520  532607 main.go:141] libmachine: (embed-certs-346100) DBG | exit 0
	I0127 12:31:21.552254  532607 main.go:141] libmachine: (embed-certs-346100) DBG | SSH cmd err, output: <nil>: 
	I0127 12:31:21.552642  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetConfigRaw
	I0127 12:31:21.553390  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetIP
	I0127 12:31:21.555762  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.556237  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:21.556262  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.556546  532607 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100/config.json ...
	I0127 12:31:21.556729  532607 machine.go:93] provisionDockerMachine start ...
	I0127 12:31:21.556768  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:31:21.556954  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:31:21.559167  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.559447  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:21.559470  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.559557  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:31:21.559732  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:21.559950  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:21.560109  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:31:21.560286  532607 main.go:141] libmachine: Using SSH client type: native
	I0127 12:31:21.560473  532607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.206 22 <nil> <nil>}
	I0127 12:31:21.560483  532607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:31:21.664378  532607 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:31:21.664418  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetMachineName
	I0127 12:31:21.664633  532607 buildroot.go:166] provisioning hostname "embed-certs-346100"
	I0127 12:31:21.664657  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetMachineName
	I0127 12:31:21.664885  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:31:21.667659  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.668013  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:21.668036  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.668201  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:31:21.668390  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:21.668590  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:21.668786  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:31:21.668965  532607 main.go:141] libmachine: Using SSH client type: native
	I0127 12:31:21.669154  532607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.206 22 <nil> <nil>}
	I0127 12:31:21.669178  532607 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-346100 && echo "embed-certs-346100" | sudo tee /etc/hostname
	I0127 12:31:21.786790  532607 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-346100
	
	I0127 12:31:21.786821  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:31:21.789914  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.790308  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:21.790337  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.790513  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:31:21.790721  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:21.790934  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:21.791079  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:31:21.791228  532607 main.go:141] libmachine: Using SSH client type: native
	I0127 12:31:21.791449  532607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.206 22 <nil> <nil>}
	I0127 12:31:21.791466  532607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-346100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-346100/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-346100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:31:21.899853  532607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:31:21.899878  532607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-471120/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-471120/.minikube}
	I0127 12:31:21.899897  532607 buildroot.go:174] setting up certificates
	I0127 12:31:21.899906  532607 provision.go:84] configureAuth start
	I0127 12:31:21.899914  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetMachineName
	I0127 12:31:21.900138  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetIP
	I0127 12:31:21.902586  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.902920  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:21.902949  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.903133  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:31:21.905440  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.905805  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:21.905837  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:21.906005  532607 provision.go:143] copyHostCerts
	I0127 12:31:21.906063  532607 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem, removing ...
	I0127 12:31:21.906088  532607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem
	I0127 12:31:21.906160  532607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem (1679 bytes)
	I0127 12:31:21.906268  532607 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem, removing ...
	I0127 12:31:21.906280  532607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem
	I0127 12:31:21.906309  532607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem (1082 bytes)
	I0127 12:31:21.906425  532607 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem, removing ...
	I0127 12:31:21.906436  532607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem
	I0127 12:31:21.906463  532607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem (1123 bytes)
	I0127 12:31:21.906528  532607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem org=jenkins.embed-certs-346100 san=[127.0.0.1 192.168.50.206 embed-certs-346100 localhost minikube]
	I0127 12:31:22.036691  532607 provision.go:177] copyRemoteCerts
	I0127 12:31:22.036766  532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:31:22.036791  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:31:22.039218  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.039510  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:22.039539  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.039679  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:31:22.039846  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:22.040010  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:31:22.040134  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:31:22.122289  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:31:22.145436  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:31:22.166878  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 12:31:22.188221  532607 provision.go:87] duration metric: took 288.303708ms to configureAuth
	I0127 12:31:22.188245  532607 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:31:22.188416  532607 config.go:182] Loaded profile config "embed-certs-346100": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:31:22.188428  532607 machine.go:96] duration metric: took 631.670767ms to provisionDockerMachine
	I0127 12:31:22.188439  532607 start.go:293] postStartSetup for "embed-certs-346100" (driver="kvm2")
	I0127 12:31:22.188451  532607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:31:22.188484  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:31:22.188846  532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:31:22.188876  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:31:22.691933  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.692413  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:22.692448  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.692625  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:31:22.692862  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:22.693044  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:31:22.693198  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:31:22.774980  532607 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:31:22.779475  532607 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:31:22.779508  532607 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/addons for local assets ...
	I0127 12:31:22.779576  532607 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/files for local assets ...
	I0127 12:31:22.779799  532607 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem -> 4783872.pem in /etc/ssl/certs
	I0127 12:31:22.779910  532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:31:22.789085  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:31:22.815270  532607 start.go:296] duration metric: took 626.81676ms for postStartSetup
	I0127 12:31:22.815315  532607 fix.go:56] duration metric: took 20.361904973s for fixHost
	I0127 12:31:22.815339  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:31:22.819234  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.819707  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:22.819740  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.820107  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:31:22.820294  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:22.820475  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:22.820622  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:31:22.820830  532607 main.go:141] libmachine: Using SSH client type: native
	I0127 12:31:22.821116  532607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.206 22 <nil> <nil>}
	I0127 12:31:22.821190  532607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:31:22.933563  532607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981082.892358986
	
	I0127 12:31:22.933587  532607 fix.go:216] guest clock: 1737981082.892358986
	I0127 12:31:22.933596  532607 fix.go:229] Guest: 2025-01-27 12:31:22.892358986 +0000 UTC Remote: 2025-01-27 12:31:22.815320545 +0000 UTC m=+20.500343870 (delta=77.038441ms)
	I0127 12:31:22.933642  532607 fix.go:200] guest clock delta is within tolerance: 77.038441ms
	I0127 12:31:22.933652  532607 start.go:83] releasing machines lock for "embed-certs-346100", held for 20.480256244s
	I0127 12:31:22.933681  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:31:22.933981  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetIP
	I0127 12:31:22.937139  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.937505  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:22.937536  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.937747  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:31:22.938211  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:31:22.938364  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:31:22.938445  532607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:31:22.938500  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:31:22.938533  532607 ssh_runner.go:195] Run: cat /version.json
	I0127 12:31:22.938555  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:31:22.941434  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.941738  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.941787  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:22.941807  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.941984  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:31:22.942131  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:22.942155  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:22.942189  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:22.942297  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:31:22.942375  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:31:22.942450  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:31:22.942524  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:31:22.942548  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:31:22.942647  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:31:23.056567  532607 ssh_runner.go:195] Run: systemctl --version
	I0127 12:31:23.064576  532607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:31:23.071957  532607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:31:23.072025  532607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:31:23.094414  532607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:31:23.094439  532607 start.go:495] detecting cgroup driver to use...
	I0127 12:31:23.094501  532607 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:31:23.127599  532607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:31:23.141023  532607 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:31:23.141084  532607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:31:23.155869  532607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:31:23.174473  532607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:31:23.320531  532607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:31:23.505310  532607 docker.go:233] disabling docker service ...
	I0127 12:31:23.505392  532607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:31:23.522482  532607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:31:23.536081  532607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:31:23.658882  532607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:31:23.788187  532607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:31:23.801848  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:31:23.826392  532607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:31:23.838563  532607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:31:23.849493  532607 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:31:23.849562  532607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:31:23.859528  532607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:31:23.869595  532607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:31:23.879264  532607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:31:23.888581  532607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:31:23.898119  532607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:31:23.907521  532607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:31:23.917056  532607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:31:23.926945  532607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:31:23.936139  532607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:31:23.936183  532607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:31:23.950034  532607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:31:23.961204  532607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:31:24.091895  532607 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:31:24.118796  532607 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 12:31:24.118858  532607 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:31:24.123479  532607 retry.go:31] will retry after 1.349905672s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 12:31:25.473988  532607 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:31:25.478946  532607 start.go:563] Will wait 60s for crictl version
	I0127 12:31:25.479010  532607 ssh_runner.go:195] Run: which crictl
	I0127 12:31:25.482577  532607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:31:25.527870  532607 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 12:31:25.527951  532607 ssh_runner.go:195] Run: containerd --version
	I0127 12:31:25.553829  532607 ssh_runner.go:195] Run: containerd --version
	I0127 12:31:25.578999  532607 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 12:31:25.580111  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetIP
	I0127 12:31:25.583233  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:25.583722  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:31:25.583751  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:31:25.584011  532607 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 12:31:25.587766  532607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:31:25.601050  532607 kubeadm.go:883] updating cluster {Name:embed-certs-346100 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-346100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:31:25.601219  532607 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:31:25.601289  532607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:31:25.633915  532607 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:31:25.633942  532607 containerd.go:534] Images already preloaded, skipping extraction
	I0127 12:31:25.634007  532607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:31:25.665000  532607 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:31:25.665026  532607 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:31:25.665036  532607 kubeadm.go:934] updating node { 192.168.50.206 8443 v1.32.1 containerd true true} ...
	I0127 12:31:25.665167  532607 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-346100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-346100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:31:25.665228  532607 ssh_runner.go:195] Run: sudo crictl info
	I0127 12:31:25.700720  532607 cni.go:84] Creating CNI manager for ""
	I0127 12:31:25.700761  532607 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:31:25.700776  532607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:31:25.700808  532607 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.206 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-346100 NodeName:embed-certs-346100 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:31:25.701000  532607 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-346100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:31:25.701082  532607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:31:25.710745  532607 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:31:25.710811  532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:31:25.719665  532607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0127 12:31:25.737718  532607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:31:25.753442  532607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2314 bytes)
	I0127 12:31:25.769059  532607 ssh_runner.go:195] Run: grep 192.168.50.206	control-plane.minikube.internal$ /etc/hosts
	I0127 12:31:25.772434  532607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:31:25.784331  532607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:31:25.915176  532607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:31:25.932832  532607 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100 for IP: 192.168.50.206
	I0127 12:31:25.932864  532607 certs.go:194] generating shared ca certs ...
	I0127 12:31:25.932888  532607 certs.go:226] acquiring lock for ca certs: {Name:mk02d117412837bd489768267e2b174e6c3ff6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:31:25.933098  532607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key
	I0127 12:31:25.933158  532607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key
	I0127 12:31:25.933175  532607 certs.go:256] generating profile certs ...
	I0127 12:31:25.933267  532607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100/client.key
	I0127 12:31:25.933324  532607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100/apiserver.key.9db97079
	I0127 12:31:25.933392  532607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100/proxy-client.key
	I0127 12:31:25.933570  532607 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem (1338 bytes)
	W0127 12:31:25.933617  532607 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387_empty.pem, impossibly tiny 0 bytes
	I0127 12:31:25.933638  532607 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:31:25.933679  532607 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:31:25.933720  532607 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:31:25.933776  532607 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem (1679 bytes)
	I0127 12:31:25.933854  532607 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:31:25.934752  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:31:25.972077  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:31:25.999202  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:31:26.031076  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:31:26.075509  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 12:31:26.102248  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:31:26.124163  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:31:26.149149  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/embed-certs-346100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:31:26.175445  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /usr/share/ca-certificates/4783872.pem (1708 bytes)
	I0127 12:31:26.202778  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:31:26.230053  532607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem --> /usr/share/ca-certificates/478387.pem (1338 bytes)
	I0127 12:31:26.256329  532607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:31:26.273395  532607 ssh_runner.go:195] Run: openssl version
	I0127 12:31:26.279227  532607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783872.pem && ln -fs /usr/share/ca-certificates/4783872.pem /etc/ssl/certs/4783872.pem"
	I0127 12:31:26.289167  532607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783872.pem
	I0127 12:31:26.293208  532607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:28 /usr/share/ca-certificates/4783872.pem
	I0127 12:31:26.293266  532607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783872.pem
	I0127 12:31:26.298562  532607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783872.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:31:26.307806  532607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:31:26.318347  532607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:31:26.322698  532607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:31:26.322739  532607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:31:26.327998  532607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:31:26.337438  532607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478387.pem && ln -fs /usr/share/ca-certificates/478387.pem /etc/ssl/certs/478387.pem"
	I0127 12:31:26.346693  532607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478387.pem
	I0127 12:31:26.350553  532607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:28 /usr/share/ca-certificates/478387.pem
	I0127 12:31:26.350599  532607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478387.pem
	I0127 12:31:26.356170  532607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478387.pem /etc/ssl/certs/51391683.0"
	I0127 12:31:26.366486  532607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:31:26.370619  532607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:31:26.376193  532607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:31:26.381900  532607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:31:26.387275  532607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:31:26.393237  532607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:31:26.399243  532607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:31:26.405380  532607 kubeadm.go:392] StartCluster: {Name:embed-certs-346100 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-346100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:26.405495  532607 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 12:31:26.405548  532607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:31:26.450043  532607 cri.go:89] found id: "8a130e97cf14fce9882a54bc6b454daabe7a7a2c77da492dd5968dcd130fc00e"
	I0127 12:31:26.450069  532607 cri.go:89] found id: "e064bf452575a61b7608d1db12689b6b4ed8edacd2ba7577482ea70d34118d9c"
	I0127 12:31:26.450075  532607 cri.go:89] found id: "a60088c0f6a605c97b6d73c8e4b9ab75e06b51b56ca52a3cfa08c06a2962a369"
	I0127 12:31:26.450080  532607 cri.go:89] found id: "56d54b897887811fdd2bb422065099e6cd4386a9a19b2414dfc1ae956da9979d"
	I0127 12:31:26.450084  532607 cri.go:89] found id: "3ebe2b1c9e3e619ace1b6e1cf7c666c6d2942bbbcd57cb2c679370d4ddca0498"
	I0127 12:31:26.450088  532607 cri.go:89] found id: "89af3cdc3afb0725fdde830ac4ebab34924afddc15c5726258f82c6f95185051"
	I0127 12:31:26.450093  532607 cri.go:89] found id: "1ed41747864014d38cd4a2ec4e16711c238fc583b993c4a0980f3c753b3c3621"
	I0127 12:31:26.450097  532607 cri.go:89] found id: ""
	I0127 12:31:26.450158  532607 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 12:31:26.463888  532607 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T12:31:26Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 12:31:26.463992  532607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:31:26.473520  532607 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:31:26.473534  532607 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:31:26.473576  532607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:31:26.481767  532607 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:31:26.482452  532607 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-346100" does not appear in /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:31:26.482748  532607 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-471120/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-346100" cluster setting kubeconfig missing "embed-certs-346100" context setting]
	I0127 12:31:26.483303  532607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:31:26.484675  532607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:31:26.493081  532607 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.206
	I0127 12:31:26.493116  532607 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:31:26.493131  532607 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 12:31:26.493184  532607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:31:26.529536  532607 cri.go:89] found id: "8a130e97cf14fce9882a54bc6b454daabe7a7a2c77da492dd5968dcd130fc00e"
	I0127 12:31:26.529558  532607 cri.go:89] found id: "e064bf452575a61b7608d1db12689b6b4ed8edacd2ba7577482ea70d34118d9c"
	I0127 12:31:26.529565  532607 cri.go:89] found id: "a60088c0f6a605c97b6d73c8e4b9ab75e06b51b56ca52a3cfa08c06a2962a369"
	I0127 12:31:26.529571  532607 cri.go:89] found id: "56d54b897887811fdd2bb422065099e6cd4386a9a19b2414dfc1ae956da9979d"
	I0127 12:31:26.529586  532607 cri.go:89] found id: "3ebe2b1c9e3e619ace1b6e1cf7c666c6d2942bbbcd57cb2c679370d4ddca0498"
	I0127 12:31:26.529591  532607 cri.go:89] found id: "89af3cdc3afb0725fdde830ac4ebab34924afddc15c5726258f82c6f95185051"
	I0127 12:31:26.529595  532607 cri.go:89] found id: "1ed41747864014d38cd4a2ec4e16711c238fc583b993c4a0980f3c753b3c3621"
	I0127 12:31:26.529604  532607 cri.go:89] found id: ""
	I0127 12:31:26.529611  532607 cri.go:252] Stopping containers: [8a130e97cf14fce9882a54bc6b454daabe7a7a2c77da492dd5968dcd130fc00e e064bf452575a61b7608d1db12689b6b4ed8edacd2ba7577482ea70d34118d9c a60088c0f6a605c97b6d73c8e4b9ab75e06b51b56ca52a3cfa08c06a2962a369 56d54b897887811fdd2bb422065099e6cd4386a9a19b2414dfc1ae956da9979d 3ebe2b1c9e3e619ace1b6e1cf7c666c6d2942bbbcd57cb2c679370d4ddca0498 89af3cdc3afb0725fdde830ac4ebab34924afddc15c5726258f82c6f95185051 1ed41747864014d38cd4a2ec4e16711c238fc583b993c4a0980f3c753b3c3621]
	I0127 12:31:26.529657  532607 ssh_runner.go:195] Run: which crictl
	I0127 12:31:26.533156  532607 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 8a130e97cf14fce9882a54bc6b454daabe7a7a2c77da492dd5968dcd130fc00e e064bf452575a61b7608d1db12689b6b4ed8edacd2ba7577482ea70d34118d9c a60088c0f6a605c97b6d73c8e4b9ab75e06b51b56ca52a3cfa08c06a2962a369 56d54b897887811fdd2bb422065099e6cd4386a9a19b2414dfc1ae956da9979d 3ebe2b1c9e3e619ace1b6e1cf7c666c6d2942bbbcd57cb2c679370d4ddca0498 89af3cdc3afb0725fdde830ac4ebab34924afddc15c5726258f82c6f95185051 1ed41747864014d38cd4a2ec4e16711c238fc583b993c4a0980f3c753b3c3621
	I0127 12:31:26.576661  532607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:31:26.591798  532607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:31:26.603266  532607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:31:26.603299  532607 kubeadm.go:157] found existing configuration files:
	
	I0127 12:31:26.603367  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:31:26.611767  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:31:26.611855  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:31:26.620824  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:31:26.629914  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:31:26.629983  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:31:26.639394  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:31:26.647951  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:31:26.648031  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:31:26.656753  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:31:26.666434  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:31:26.666489  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:31:26.675225  532607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:31:26.684207  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:26.812723  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:28.024025  532607 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.211243073s)
	I0127 12:31:28.024071  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:28.229650  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:28.312773  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:28.392148  532607 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:31:28.392245  532607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:28.893006  532607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:29.392513  532607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:29.415012  532607 api_server.go:72] duration metric: took 1.022860609s to wait for apiserver process to appear ...
	I0127 12:31:29.415045  532607 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:31:29.415070  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:31:29.415560  532607 api_server.go:269] stopped: https://192.168.50.206:8443/healthz: Get "https://192.168.50.206:8443/healthz": dial tcp 192.168.50.206:8443: connect: connection refused
	I0127 12:31:29.915207  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:31:32.146930  532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:31:32.146971  532607 api_server.go:103] status: https://192.168.50.206:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:31:32.146990  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:31:32.191603  532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:31:32.191641  532607 api_server.go:103] status: https://192.168.50.206:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:31:32.416060  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:31:32.432160  532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:31:32.432196  532607 api_server.go:103] status: https://192.168.50.206:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:31:32.915224  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:31:32.922385  532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:31:32.922417  532607 api_server.go:103] status: https://192.168.50.206:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:31:33.416140  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:31:33.428422  532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 200:
	ok
	I0127 12:31:33.439665  532607 api_server.go:141] control plane version: v1.32.1
	I0127 12:31:33.439696  532607 api_server.go:131] duration metric: took 4.024644536s to wait for apiserver health ...
	I0127 12:31:33.439706  532607 cni.go:84] Creating CNI manager for ""
	I0127 12:31:33.439713  532607 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:31:33.441533  532607 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:31:33.442831  532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:31:33.454359  532607 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:31:33.492532  532607 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:31:33.518282  532607 system_pods.go:59] 8 kube-system pods found
	I0127 12:31:33.518343  532607 system_pods.go:61] "coredns-668d6bf9bc-6vw68" [34f81f72-87b7-4d45-ab2f-3ac55258cc1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:31:33.518358  532607 system_pods.go:61] "etcd-embed-certs-346100" [f16e65d4-fc89-4470-9e84-cc7aea762d42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:31:33.518370  532607 system_pods.go:61] "kube-apiserver-embed-certs-346100" [da32f3e8-bd87-411d-b95c-e2e6c7782574] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:31:33.518393  532607 system_pods.go:61] "kube-controller-manager-embed-certs-346100" [2c7318f7-78f3-4fdf-b436-c29e46e8d6e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:31:33.518407  532607 system_pods.go:61] "kube-proxy-899f9" [c6bbfb81-7ae9-49f6-a27b-a63edbeba846] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:31:33.518418  532607 system_pods.go:61] "kube-scheduler-embed-certs-346100" [4c32c689-cc50-46ba-adc5-d0eaa1ab89a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:31:33.518426  532607 system_pods.go:61] "metrics-server-f79f97bbb-7qdhh" [d022191b-dc7b-42a2-a3f1-4129005be826] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:31:33.518439  532607 system_pods.go:61] "storage-provisioner" [cd095c8f-0859-4ba7-97f2-7293a942e307] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:31:33.518452  532607 system_pods.go:74] duration metric: took 25.896668ms to wait for pod list to return data ...
	I0127 12:31:33.518466  532607 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:31:33.527135  532607 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:31:33.527174  532607 node_conditions.go:123] node cpu capacity is 2
	I0127 12:31:33.527188  532607 node_conditions.go:105] duration metric: took 8.715558ms to run NodePressure ...
	I0127 12:31:33.527214  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:33.798245  532607 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 12:31:33.803075  532607 kubeadm.go:739] kubelet initialised
	I0127 12:31:33.803095  532607 kubeadm.go:740] duration metric: took 4.824189ms waiting for restarted kubelet to initialise ...
	I0127 12:31:33.803104  532607 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:31:33.808312  532607 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-6vw68" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:35.816017  532607 pod_ready.go:103] pod "coredns-668d6bf9bc-6vw68" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:38.315460  532607 pod_ready.go:103] pod "coredns-668d6bf9bc-6vw68" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:40.814524  532607 pod_ready.go:93] pod "coredns-668d6bf9bc-6vw68" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:40.814546  532607 pod_ready.go:82] duration metric: took 7.006203199s for pod "coredns-668d6bf9bc-6vw68" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:40.814555  532607 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:40.819651  532607 pod_ready.go:93] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:40.819667  532607 pod_ready.go:82] duration metric: took 5.106925ms for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:40.819675  532607 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:42.826431  532607 pod_ready.go:103] pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:44.827294  532607 pod_ready.go:103] pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:47.326571  532607 pod_ready.go:103] pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:48.325890  532607 pod_ready.go:93] pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:48.325915  532607 pod_ready.go:82] duration metric: took 7.506233045s for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:48.325925  532607 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:48.834581  532607 pod_ready.go:93] pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:48.834611  532607 pod_ready.go:82] duration metric: took 508.677669ms for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:48.834625  532607 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-899f9" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:48.839942  532607 pod_ready.go:93] pod "kube-proxy-899f9" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:48.839964  532607 pod_ready.go:82] duration metric: took 5.330274ms for pod "kube-proxy-899f9" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:48.839975  532607 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:48.845610  532607 pod_ready.go:93] pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:31:48.845636  532607 pod_ready.go:82] duration metric: took 5.651636ms for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:48.845697  532607 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:50.854110  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:53.352562  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:55.355419  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:57.359360  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:59.853551  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:02.352553  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:04.852226  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:06.852588  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:09.352814  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:11.355781  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:13.854059  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:16.474998  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:18.852517  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:20.853140  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:22.860410  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:25.351044  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:27.352519  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:29.353385  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:31.853033  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:33.853174  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:36.352675  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:38.353003  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:40.852536  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:43.352451  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:45.353974  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:47.851579  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:49.851925  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:51.852394  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:54.353127  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:56.854768  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:59.352558  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:01.852323  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:03.853089  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:06.351327  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:08.851729  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:10.851932  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:12.852403  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:15.351858  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:17.352514  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:19.852312  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:22.350927  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:24.851724  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:26.852239  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:29.351942  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:31.353777  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:33.855532  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:36.351150  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:38.851533  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:40.852383  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:43.351986  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:45.852443  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:48.352584  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:50.851473  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:52.852006  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:54.852678  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:56.853216  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:59.352825  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:01.852195  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:04.352053  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:06.851841  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:08.852205  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:10.852278  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:13.352984  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:15.851578  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:17.852674  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:20.351699  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:22.352520  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:24.852141  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:27.352815  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:29.852332  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:32.352230  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:34.352785  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:36.851447  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:38.851602  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:40.851984  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:42.852354  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:45.352296  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:47.353206  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:49.852539  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:51.852784  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:53.853016  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:55.853717  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:58.352987  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:00.353937  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:02.851877  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:04.852811  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:06.853114  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:09.352075  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:11.353086  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:13.354174  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:15.852696  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:17.853196  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:20.351887  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:22.353331  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:24.852336  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:26.853167  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:28.857732  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:31.351986  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:33.352673  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:35.353740  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:37.852399  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:39.852764  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:42.352585  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:44.353035  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:46.353087  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:48.846560  532607 pod_ready.go:82] duration metric: took 4m0.000837349s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:48.846588  532607 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 12:35:48.846609  532607 pod_ready.go:39] duration metric: took 4m15.043496386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:48.846642  532607 kubeadm.go:597] duration metric: took 4m22.373102966s to restartPrimaryControlPlane
	W0127 12:35:48.846704  532607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:35:48.846732  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 12:35:51.040149  532607 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.193395005s)
	I0127 12:35:51.040242  532607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:35:51.059048  532607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:35:51.071298  532607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:35:51.083050  532607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:35:51.083071  532607 kubeadm.go:157] found existing configuration files:
	
	I0127 12:35:51.083125  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:35:51.095124  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:35:51.095208  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:35:51.109222  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:35:51.120314  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:35:51.120390  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:35:51.129841  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:35:51.138490  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:35:51.138545  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:35:51.148658  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:35:51.157842  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:35:51.157894  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:35:51.167146  532607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:35:51.220576  532607 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:35:51.220796  532607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:35:51.342653  532607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:35:51.342830  532607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:35:51.343020  532607 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:35:51.348865  532607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:35:51.351235  532607 out.go:235]   - Generating certificates and keys ...
	I0127 12:35:51.351355  532607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:35:51.351445  532607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:35:51.351549  532607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:35:51.351635  532607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:35:51.351728  532607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:35:51.351801  532607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:35:51.351908  532607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:35:51.352000  532607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:35:51.352111  532607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:35:51.352262  532607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:35:51.352422  532607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:35:51.352546  532607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:35:51.416524  532607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:35:51.666997  532607 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:35:51.867237  532607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:35:52.007584  532607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:35:52.100986  532607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:35:52.101889  532607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:35:52.105806  532607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:35:52.107605  532607 out.go:235]   - Booting up control plane ...
	I0127 12:35:52.107745  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:35:52.108083  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:35:52.109913  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:35:52.146307  532607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:35:52.156130  532607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:35:52.156211  532607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:35:52.316523  532607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:35:52.316653  532607 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:35:53.322303  532607 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005635238s
	I0127 12:35:53.322436  532607 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:35:58.324673  532607 kubeadm.go:310] [api-check] The API server is healthy after 5.002577765s
	I0127 12:35:58.341207  532607 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:35:58.354763  532607 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:35:58.376218  532607 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:35:58.376468  532607 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-346100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:35:58.389424  532607 kubeadm.go:310] [bootstrap-token] Using token: 5069a0.5f3g1pdxhpmrcoga
	I0127 12:35:58.390773  532607 out.go:235]   - Configuring RBAC rules ...
	I0127 12:35:58.390901  532607 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:35:58.397069  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:35:58.405069  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:35:58.409291  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:35:58.412914  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:35:58.415499  532607 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:35:58.732028  532607 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:35:59.154936  532607 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:35:59.732670  532607 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:35:59.734653  532607 kubeadm.go:310] 
	I0127 12:35:59.734754  532607 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:35:59.734788  532607 kubeadm.go:310] 
	I0127 12:35:59.734919  532607 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:35:59.734933  532607 kubeadm.go:310] 
	I0127 12:35:59.734978  532607 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:35:59.735094  532607 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:35:59.735193  532607 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:35:59.735206  532607 kubeadm.go:310] 
	I0127 12:35:59.735295  532607 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:35:59.735316  532607 kubeadm.go:310] 
	I0127 12:35:59.735384  532607 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:35:59.735392  532607 kubeadm.go:310] 
	I0127 12:35:59.735463  532607 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:35:59.735570  532607 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:35:59.735692  532607 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:35:59.735707  532607 kubeadm.go:310] 
	I0127 12:35:59.735853  532607 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:35:59.735964  532607 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:35:59.735986  532607 kubeadm.go:310] 
	I0127 12:35:59.736104  532607 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
	I0127 12:35:59.736265  532607 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
	I0127 12:35:59.736299  532607 kubeadm.go:310] 	--control-plane 
	I0127 12:35:59.736312  532607 kubeadm.go:310] 
	I0127 12:35:59.736432  532607 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:35:59.736441  532607 kubeadm.go:310] 
	I0127 12:35:59.736583  532607 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
	I0127 12:35:59.736761  532607 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 
	I0127 12:35:59.738118  532607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:35:59.738152  532607 cni.go:84] Creating CNI manager for ""
	I0127 12:35:59.738162  532607 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:35:59.739901  532607 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:35:59.741063  532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:35:59.759536  532607 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:35:59.777178  532607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:35:59.777199  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:59.777236  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-346100 minikube.k8s.io/updated_at=2025_01_27T12_35_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=embed-certs-346100 minikube.k8s.io/primary=true
	I0127 12:35:59.974092  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:59.974117  532607 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:00.474716  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:00.974693  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:01.474216  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:01.974205  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:02.475052  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:02.975120  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:03.474457  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:03.577041  532607 kubeadm.go:1113] duration metric: took 3.799909499s to wait for elevateKubeSystemPrivileges
	I0127 12:36:03.577092  532607 kubeadm.go:394] duration metric: took 4m37.171719699s to StartCluster
	I0127 12:36:03.577128  532607 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:03.577224  532607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:03.579144  532607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:03.579423  532607 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:03.579505  532607 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:03.579620  532607 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-346100"
	I0127 12:36:03.579641  532607 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-346100"
	W0127 12:36:03.579650  532607 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:03.579651  532607 addons.go:69] Setting default-storageclass=true in profile "embed-certs-346100"
	I0127 12:36:03.579676  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.579688  532607 config.go:182] Loaded profile config "embed-certs-346100": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:03.579700  532607 addons.go:69] Setting dashboard=true in profile "embed-certs-346100"
	I0127 12:36:03.579723  532607 addons.go:238] Setting addon dashboard=true in "embed-certs-346100"
	I0127 12:36:03.579715  532607 addons.go:69] Setting metrics-server=true in profile "embed-certs-346100"
	W0127 12:36:03.579740  532607 addons.go:247] addon dashboard should already be in state true
	I0127 12:36:03.579694  532607 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-346100"
	I0127 12:36:03.579749  532607 addons.go:238] Setting addon metrics-server=true in "embed-certs-346100"
	W0127 12:36:03.579764  532607 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:03.579779  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.579800  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.580054  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580088  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580101  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580150  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580190  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580215  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580233  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580258  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.581024  532607 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:03.582429  532607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:03.598339  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0127 12:36:03.598375  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 12:36:03.598838  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.598892  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34331
	I0127 12:36:03.598919  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.599306  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.599470  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599486  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599497  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599511  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599722  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599738  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599912  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.599974  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.600223  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.600494  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600530  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600545  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600578  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600674  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600699  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600881  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0127 12:36:03.601524  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.602100  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.602116  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.602471  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.602687  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.606648  532607 addons.go:238] Setting addon default-storageclass=true in "embed-certs-346100"
	W0127 12:36:03.606677  532607 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:03.606709  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.607067  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.607104  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.619967  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0127 12:36:03.620348  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33043
	I0127 12:36:03.620623  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.620935  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.621427  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.621447  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.621789  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.621804  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.621998  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.622221  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.622273  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.622543  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.624486  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.624677  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.625420  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43503
	I0127 12:36:03.626112  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.626167  532607 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:03.626180  532607 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:03.626583  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.626602  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.626611  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0127 12:36:03.626942  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.627027  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.627437  532607 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:03.627453  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:03.627464  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.627467  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.627475  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.627504  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.627471  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.627836  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.628149  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.628561  532607 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:03.629535  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:03.629551  532607 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:03.629575  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.630434  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.631724  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.632213  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.632232  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.632448  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.632593  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.632682  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.632867  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.632996  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.633161  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.633189  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.633418  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.633573  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.633701  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.633812  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.634247  532607 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:03.635266  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:03.635284  532607 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:03.635305  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.637878  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.638306  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.638338  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.638542  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.638697  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.638867  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.639116  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.643537  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0127 12:36:03.643881  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.644309  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.644327  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.644644  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.644952  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.646128  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.646325  532607 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:03.646341  532607 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:03.646358  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.649282  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.649641  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.649669  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.649910  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.650077  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.650198  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.650298  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.805663  532607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:03.824512  532607 node_ready.go:35] waiting up to 6m0s for node "embed-certs-346100" to be "Ready" ...
	I0127 12:36:03.856505  532607 node_ready.go:49] node "embed-certs-346100" has status "Ready":"True"
	I0127 12:36:03.856540  532607 node_ready.go:38] duration metric: took 31.977019ms for node "embed-certs-346100" to be "Ready" ...
	I0127 12:36:03.856555  532607 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:03.863683  532607 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:03.902624  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:03.925389  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:03.977654  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:03.977686  532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:04.012033  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:04.012063  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:04.029962  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:04.029991  532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:04.076532  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:04.076565  532607 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:04.136201  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:04.136229  532607 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:04.142268  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:04.142293  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:04.174895  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:04.174919  532607 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:04.185938  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:04.185959  532607 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:04.204606  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:04.226546  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:04.226574  532607 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:04.340411  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:04.340438  532607 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:04.424847  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.424878  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.425230  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:04.425269  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.425293  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.425304  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.425329  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.425596  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.425613  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.425627  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:04.443059  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.443080  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.443380  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.443404  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.457532  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:04.457557  532607 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:04.529771  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:04.529803  532607 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:04.581907  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:05.466462  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541011177s)
	I0127 12:36:05.466526  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.466544  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.466865  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.466934  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.466947  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.466957  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.466969  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.467283  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.467328  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.467300  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.677171  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.472522816s)
	I0127 12:36:05.677230  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.677244  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.677645  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.677684  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.677699  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.677711  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.677723  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.678056  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.678091  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.678115  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.678132  532607 addons.go:479] Verifying addon metrics-server=true in "embed-certs-346100"
	I0127 12:36:05.870203  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:06.503934  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.921960102s)
	I0127 12:36:06.504007  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:06.504025  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:06.504372  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:06.504489  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:06.504506  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:06.504514  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:06.504460  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:06.504814  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:06.504834  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:06.504835  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:06.506475  532607 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-346100 addons enable metrics-server
	
	I0127 12:36:06.507672  532607 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 12:36:06.508878  532607 addons.go:514] duration metric: took 2.929397312s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 12:36:08.374973  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:10.872073  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:12.874242  532607 pod_ready.go:93] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.874282  532607 pod_ready.go:82] duration metric: took 9.010574512s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.874303  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.882689  532607 pod_ready.go:93] pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.882775  532607 pod_ready.go:82] duration metric: took 8.462495ms for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.882801  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.888659  532607 pod_ready.go:93] pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.888693  532607 pod_ready.go:82] duration metric: took 5.874272ms for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.888707  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.894080  532607 pod_ready.go:93] pod "kube-proxy-smp6l" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.894141  532607 pod_ready.go:82] duration metric: took 5.425838ms for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.894163  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.900793  532607 pod_ready.go:93] pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.900849  532607 pod_ready.go:82] duration metric: took 6.668808ms for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.900869  532607 pod_ready.go:39] duration metric: took 9.044300135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:12.900904  532607 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:12.900995  532607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:12.922995  532607 api_server.go:72] duration metric: took 9.343524429s to wait for apiserver process to appear ...
	I0127 12:36:12.923066  532607 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:12.923097  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:36:12.930234  532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 200:
	ok
	I0127 12:36:12.931482  532607 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:12.931504  532607 api_server.go:131] duration metric: took 8.421115ms to wait for apiserver health ...
	I0127 12:36:12.931513  532607 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:13.073659  532607 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:13.073701  532607 system_pods.go:61] "coredns-668d6bf9bc-46nfk" [ca146154-7693-43e5-ae2a-f0c3148327b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:13.073712  532607 system_pods.go:61] "coredns-668d6bf9bc-9p64b" [4d44d79e-ea3d-4085-9fb2-356746e71e9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:13.073722  532607 system_pods.go:61] "etcd-embed-certs-346100" [cb00782a-b078-43ee-aa3f-4806aa7629d6] Running
	I0127 12:36:13.073729  532607 system_pods.go:61] "kube-apiserver-embed-certs-346100" [7b0a8d77-4737-4bde-8e2a-2462c524f9a2] Running
	I0127 12:36:13.073735  532607 system_pods.go:61] "kube-controller-manager-embed-certs-346100" [196254b2-812b-43a4-ae10-d55a11957faf] Running
	I0127 12:36:13.073741  532607 system_pods.go:61] "kube-proxy-smp6l" [886c9cd4-795b-4e33-a16e-e12302c37665] Running
	I0127 12:36:13.073746  532607 system_pods.go:61] "kube-scheduler-embed-certs-346100" [90cbc1fe-52a3-45d8-a8e9-edc60f5c4829] Running
	I0127 12:36:13.073754  532607 system_pods.go:61] "metrics-server-f79f97bbb-w8fsn" [3a78ab43-37b0-4dc0-89a9-59a558ef997c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:13.073811  532607 system_pods.go:61] "storage-provisioner" [0d021617-8412-4f33-ba4f-2b3b458721ff] Running
	I0127 12:36:13.073828  532607 system_pods.go:74] duration metric: took 142.306493ms to wait for pod list to return data ...
	I0127 12:36:13.073848  532607 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:13.273298  532607 default_sa.go:45] found service account: "default"
	I0127 12:36:13.273415  532607 default_sa.go:55] duration metric: took 199.555226ms for default service account to be created ...
	I0127 12:36:13.273446  532607 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:36:13.477525  532607 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-346100 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-346100 -n embed-certs-346100
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-346100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-346100 logs -n 25: (1.167062304s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | 27 Jan 25 12:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-215237                  | no-preload-215237            | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | 27 Jan 25 12:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215237                                   | no-preload-215237            | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-346100                 | embed-certs-346100           | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-346100                                  | embed-certs-346100           | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-887672       | default-k8s-diff-port-887672 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-887672 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC |                     |
	|         | default-k8s-diff-port-887672                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-858845             | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-858845 image                           | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	| delete  | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	| start   | -p newest-cni-610630 --memory=2200 --alsologtostderr   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-610630             | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-610630                  | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-610630 --memory=2200 --alsologtostderr   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:36 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-610630 image list                           | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	| delete  | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	| delete  | -p no-preload-215237                                   | no-preload-215237            | jenkins | v1.35.0 | 27 Jan 25 12:57 UTC | 27 Jan 25 12:57 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:35:43
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:35:43.059479  534894 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:35:43.059651  534894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:35:43.059664  534894 out.go:358] Setting ErrFile to fd 2...
	I0127 12:35:43.059671  534894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:35:43.059931  534894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 12:35:43.061091  534894 out.go:352] Setting JSON to false
	I0127 12:35:43.062772  534894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11886,"bootTime":1737969457,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:35:43.062914  534894 start.go:139] virtualization: kvm guest
	I0127 12:35:43.064927  534894 out.go:177] * [newest-cni-610630] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:35:43.066246  534894 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:35:43.066268  534894 notify.go:220] Checking for updates...
	I0127 12:35:43.068595  534894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:35:43.069716  534894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:35:43.070810  534894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 12:35:43.071853  534894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:35:43.072978  534894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:35:43.074838  534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:35:43.075450  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:43.075519  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:43.091909  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35965
	I0127 12:35:43.093149  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:43.093802  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:35:43.093834  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:43.094269  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:43.094579  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:35:43.094848  534894 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:35:43.095161  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:43.095202  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:43.110695  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0127 12:35:43.111212  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:43.111903  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:35:43.111935  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:43.112295  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:43.112533  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:35:43.153545  534894 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:35:40.799070  532344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:35:40.816802  532344 node_ready.go:35] waiting up to 6m0s for node "no-preload-215237" to be "Ready" ...
	I0127 12:35:40.842677  532344 node_ready.go:49] node "no-preload-215237" has status "Ready":"True"
	I0127 12:35:40.842703  532344 node_ready.go:38] duration metric: took 25.862086ms for node "no-preload-215237" to be "Ready" ...
	I0127 12:35:40.842716  532344 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:40.853263  532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:40.876376  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:35:40.876407  532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:35:40.898870  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:35:40.903314  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:35:40.916620  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:35:40.916649  532344 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:35:41.067992  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:35:41.068023  532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:35:41.072700  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:35:41.072728  532344 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:35:41.155398  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:35:41.155426  532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:35:41.194887  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:35:41.230877  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:35:41.230909  532344 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:35:41.313376  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:35:41.313400  532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:35:41.442010  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:35:41.442049  532344 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:35:41.486996  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:35:41.487028  532344 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:35:41.616020  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:35:41.616057  532344 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:35:41.690855  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:35:41.690886  532344 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:35:41.720821  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:35:41.720851  532344 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:35:41.754849  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:35:41.990168  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091255427s)
	I0127 12:35:41.990220  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086878371s)
	I0127 12:35:41.990249  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990262  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990249  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990370  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990668  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.990683  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.990719  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.990725  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.990733  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.990747  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990758  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990821  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.990734  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990857  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.991027  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.991042  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.992412  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.992462  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.992477  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.004951  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.004969  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.005238  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.005254  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.005271  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:42.472191  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.277235038s)
	I0127 12:35:42.472268  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.472283  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.472619  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:42.472665  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.472683  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.472697  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.472706  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.472985  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.473012  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.473024  532344 addons.go:479] Verifying addon metrics-server=true in "no-preload-215237"
	I0127 12:35:42.890307  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:43.165047  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.410145551s)
	I0127 12:35:43.165103  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:43.165123  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:43.165633  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:43.165657  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:43.165676  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:43.165692  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:43.165705  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:43.165941  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:43.165957  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:43.167364  532344 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-215237 addons enable metrics-server
	
	I0127 12:35:43.168535  532344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 12:35:43.154513  534894 start.go:297] selected driver: kvm2
	I0127 12:35:43.154531  534894 start.go:901] validating driver "kvm2" against &{Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:35:43.154653  534894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:35:43.155362  534894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:35:43.155469  534894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:35:43.172617  534894 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:35:43.173026  534894 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 12:35:43.173063  534894 cni.go:84] Creating CNI manager for ""
	I0127 12:35:43.173110  534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:35:43.173145  534894 start.go:340] cluster config:
	{Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:35:43.173269  534894 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:35:43.174747  534894 out.go:177] * Starting "newest-cni-610630" primary control-plane node in "newest-cni-610630" cluster
	I0127 12:35:43.175803  534894 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:35:43.175846  534894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 12:35:43.175857  534894 cache.go:56] Caching tarball of preloaded images
	I0127 12:35:43.175957  534894 preload.go:172] Found /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:35:43.175970  534894 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 12:35:43.176077  534894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/config.json ...
	I0127 12:35:43.176271  534894 start.go:360] acquireMachinesLock for newest-cni-610630: {Name:mk818835aef0de701295cc2c98fea95e1be33202 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:35:43.176324  534894 start.go:364] duration metric: took 32.573µs to acquireMachinesLock for "newest-cni-610630"
	I0127 12:35:43.176345  534894 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:35:43.176356  534894 fix.go:54] fixHost starting: 
	I0127 12:35:43.176686  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:43.176750  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:43.191549  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37401
	I0127 12:35:43.191935  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:43.192419  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:35:43.192448  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:43.192934  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:43.193138  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:35:43.193300  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:35:43.195116  534894 fix.go:112] recreateIfNeeded on newest-cni-610630: state=Stopped err=<nil>
	I0127 12:35:43.195141  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	W0127 12:35:43.195320  534894 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:35:43.196456  534894 out.go:177] * Restarting existing kvm2 VM for "newest-cni-610630" ...
	I0127 12:35:43.169652  532344 addons.go:514] duration metric: took 2.587685868s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 12:35:45.359702  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:42.352585  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:44.353035  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:46.353087  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:44.707430  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:46.708896  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:43.197457  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Start
	I0127 12:35:43.197621  534894 main.go:141] libmachine: (newest-cni-610630) starting domain...
	I0127 12:35:43.197646  534894 main.go:141] libmachine: (newest-cni-610630) ensuring networks are active...
	I0127 12:35:43.198412  534894 main.go:141] libmachine: (newest-cni-610630) Ensuring network default is active
	I0127 12:35:43.198762  534894 main.go:141] libmachine: (newest-cni-610630) Ensuring network mk-newest-cni-610630 is active
	I0127 12:35:43.199182  534894 main.go:141] libmachine: (newest-cni-610630) getting domain XML...
	I0127 12:35:43.199981  534894 main.go:141] libmachine: (newest-cni-610630) creating domain...
	I0127 12:35:44.514338  534894 main.go:141] libmachine: (newest-cni-610630) waiting for IP...
	I0127 12:35:44.515307  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:44.515803  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:44.515875  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:44.515771  534929 retry.go:31] will retry after 248.83242ms: waiting for domain to come up
	I0127 12:35:44.766511  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:44.767046  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:44.767081  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:44.767011  534929 retry.go:31] will retry after 381.268975ms: waiting for domain to come up
	I0127 12:35:45.149680  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:45.150281  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:45.150314  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:45.150226  534929 retry.go:31] will retry after 435.74049ms: waiting for domain to come up
	I0127 12:35:45.587978  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:45.588682  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:45.588719  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:45.588634  534929 retry.go:31] will retry after 577.775914ms: waiting for domain to come up
	I0127 12:35:46.168596  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:46.169297  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:46.169332  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:46.169238  534929 retry.go:31] will retry after 539.718923ms: waiting for domain to come up
	I0127 12:35:46.711082  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:46.711652  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:46.711676  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:46.711635  534929 retry.go:31] will retry after 607.430128ms: waiting for domain to come up
	I0127 12:35:47.320403  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:47.320941  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:47.321006  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:47.320921  534929 retry.go:31] will retry after 772.973348ms: waiting for domain to come up
	I0127 12:35:46.359497  532344 pod_ready.go:93] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:46.359531  532344 pod_ready.go:82] duration metric: took 5.506181911s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:46.359547  532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.867744  532344 pod_ready.go:93] pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.867773  532344 pod_ready.go:82] duration metric: took 1.508215371s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.867785  532344 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.872748  532344 pod_ready.go:93] pod "etcd-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.872769  532344 pod_ready.go:82] duration metric: took 4.975217ms for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.872782  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.879135  532344 pod_ready.go:93] pod "kube-apiserver-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.879153  532344 pod_ready.go:82] duration metric: took 6.364009ms for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.879170  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.884792  532344 pod_ready.go:93] pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.884809  532344 pod_ready.go:82] duration metric: took 5.632068ms for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.884817  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.957535  532344 pod_ready.go:93] pod "kube-proxy-bbnm2" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.957564  532344 pod_ready.go:82] duration metric: took 72.739132ms for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.957577  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:48.358062  532344 pod_ready.go:93] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:48.358087  532344 pod_ready.go:82] duration metric: took 400.502078ms for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:48.358095  532344 pod_ready.go:39] duration metric: took 7.515367235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:48.358124  532344 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:35:48.358180  532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:48.381657  532344 api_server.go:72] duration metric: took 7.799751759s to wait for apiserver process to appear ...
	I0127 12:35:48.381684  532344 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:35:48.381704  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:35:48.387590  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 200:
	ok
	I0127 12:35:48.388765  532344 api_server.go:141] control plane version: v1.32.1
	I0127 12:35:48.388787  532344 api_server.go:131] duration metric: took 7.09706ms to wait for apiserver health ...
	I0127 12:35:48.388795  532344 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:35:48.560605  532344 system_pods.go:59] 9 kube-system pods found
	I0127 12:35:48.560642  532344 system_pods.go:61] "coredns-668d6bf9bc-v9stn" [011e6981-39d0-4fa1-bf1b-3d1e06c7c71a] Running
	I0127 12:35:48.560650  532344 system_pods.go:61] "coredns-668d6bf9bc-wwb9p" [0a034560-980a-40fb-9603-be18d02b6f05] Running
	I0127 12:35:48.560656  532344 system_pods.go:61] "etcd-no-preload-215237" [8b9ab7f2-224f-4373-9dc2-fa794a60d922] Running
	I0127 12:35:48.560659  532344 system_pods.go:61] "kube-apiserver-no-preload-215237" [064e0d8e-5d82-42bb-979d-cd0e9aa13f56] Running
	I0127 12:35:48.560663  532344 system_pods.go:61] "kube-controller-manager-no-preload-215237" [dd9c190f-c01e-4fa7-b033-57463b032d30] Running
	I0127 12:35:48.560666  532344 system_pods.go:61] "kube-proxy-bbnm2" [dd89ae69-6ad2-44cb-9c80-ba5529e22dc1] Running
	I0127 12:35:48.560671  532344 system_pods.go:61] "kube-scheduler-no-preload-215237" [41c25fba-7af8-4e0e-b96d-57be786d703c] Running
	I0127 12:35:48.560680  532344 system_pods.go:61] "metrics-server-f79f97bbb-lqck5" [3447c2da-cbb0-412c-a8d9-2be32c8e6dad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:35:48.560686  532344 system_pods.go:61] "storage-provisioner" [9627d136-2ecb-4cc3-969d-b62de2261147] Running
	I0127 12:35:48.560696  532344 system_pods.go:74] duration metric: took 171.894881ms to wait for pod list to return data ...
	I0127 12:35:48.560709  532344 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:35:48.760164  532344 default_sa.go:45] found service account: "default"
	I0127 12:35:48.760270  532344 default_sa.go:55] duration metric: took 199.548191ms for default service account to be created ...
	I0127 12:35:48.760295  532344 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:35:48.961828  532344 system_pods.go:87] 9 kube-system pods found
	I0127 12:35:48.846560  532607 pod_ready.go:82] duration metric: took 4m0.000837349s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:48.846588  532607 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 12:35:48.846609  532607 pod_ready.go:39] duration metric: took 4m15.043496386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:48.846642  532607 kubeadm.go:597] duration metric: took 4m22.373102966s to restartPrimaryControlPlane
	W0127 12:35:48.846704  532607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:35:48.846732  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 12:35:51.040149  532607 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.193395005s)
	I0127 12:35:51.040242  532607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:35:51.059048  532607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:35:51.071298  532607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:35:51.083050  532607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:35:51.083071  532607 kubeadm.go:157] found existing configuration files:
	
	I0127 12:35:51.083125  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:35:51.095124  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:35:51.095208  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:35:51.109222  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:35:51.120314  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:35:51.120390  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:35:51.129841  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:35:51.138490  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:35:51.138545  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:35:51.148658  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:35:51.157842  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:35:51.157894  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:35:51.167146  532607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:35:51.220576  532607 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:35:51.220796  532607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:35:51.342653  532607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:35:51.342830  532607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:35:51.343020  532607 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:35:51.348865  532607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:35:51.351235  532607 out.go:235]   - Generating certificates and keys ...
	I0127 12:35:51.351355  532607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:35:51.351445  532607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:35:51.351549  532607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:35:51.351635  532607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:35:51.351728  532607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:35:51.351801  532607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:35:51.351908  532607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:35:51.352000  532607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:35:51.352111  532607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:35:51.352262  532607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:35:51.352422  532607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:35:51.352546  532607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:35:51.416524  532607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:35:51.666997  532607 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:35:51.867237  532607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:35:52.007584  532607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:35:52.100986  532607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:35:52.101889  532607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:35:52.105806  532607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:35:52.107605  532607 out.go:235]   - Booting up control plane ...
	I0127 12:35:52.107745  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:35:52.108083  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:35:52.109913  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:35:52.146307  532607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:35:52.156130  532607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:35:52.156211  532607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:35:52.316523  532607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:35:52.316653  532607 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:35:48.711637  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:51.208760  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:48.096119  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:48.096791  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:48.096823  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:48.096728  534929 retry.go:31] will retry after 1.301268199s: waiting for domain to come up
	I0127 12:35:49.400077  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:49.400697  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:49.400729  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:49.400664  534929 retry.go:31] will retry after 1.62599798s: waiting for domain to come up
	I0127 12:35:51.029156  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:51.029715  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:51.029746  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:51.029706  534929 retry.go:31] will retry after 1.477748588s: waiting for domain to come up
	I0127 12:35:52.509484  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:52.510252  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:52.510299  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:52.510150  534929 retry.go:31] will retry after 1.875473187s: waiting for domain to come up
	I0127 12:35:53.322303  532607 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005635238s
	I0127 12:35:53.322436  532607 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:35:53.708069  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:56.209743  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:54.387170  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:54.387808  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:54.387840  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:54.387764  534929 retry.go:31] will retry after 2.219284161s: waiting for domain to come up
	I0127 12:35:56.609666  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:56.610140  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:56.610163  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:56.610112  534929 retry.go:31] will retry after 3.124115638s: waiting for domain to come up
	I0127 12:35:58.324673  532607 kubeadm.go:310] [api-check] The API server is healthy after 5.002577765s
	I0127 12:35:58.341207  532607 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:35:58.354763  532607 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:35:58.376218  532607 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:35:58.376468  532607 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-346100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:35:58.389424  532607 kubeadm.go:310] [bootstrap-token] Using token: 5069a0.5f3g1pdxhpmrcoga
	I0127 12:35:58.390773  532607 out.go:235]   - Configuring RBAC rules ...
	I0127 12:35:58.390901  532607 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:35:58.397069  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:35:58.405069  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:35:58.409291  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:35:58.412914  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:35:58.415499  532607 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:35:58.732028  532607 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:35:59.154936  532607 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:35:59.732670  532607 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:35:59.734653  532607 kubeadm.go:310] 
	I0127 12:35:59.734754  532607 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:35:59.734788  532607 kubeadm.go:310] 
	I0127 12:35:59.734919  532607 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:35:59.734933  532607 kubeadm.go:310] 
	I0127 12:35:59.734978  532607 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:35:59.735094  532607 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:35:59.735193  532607 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:35:59.735206  532607 kubeadm.go:310] 
	I0127 12:35:59.735295  532607 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:35:59.735316  532607 kubeadm.go:310] 
	I0127 12:35:59.735384  532607 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:35:59.735392  532607 kubeadm.go:310] 
	I0127 12:35:59.735463  532607 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:35:59.735570  532607 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:35:59.735692  532607 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:35:59.735707  532607 kubeadm.go:310] 
	I0127 12:35:59.735853  532607 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:35:59.735964  532607 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:35:59.735986  532607 kubeadm.go:310] 
	I0127 12:35:59.736104  532607 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
	I0127 12:35:59.736265  532607 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
	I0127 12:35:59.736299  532607 kubeadm.go:310] 	--control-plane 
	I0127 12:35:59.736312  532607 kubeadm.go:310] 
	I0127 12:35:59.736432  532607 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:35:59.736441  532607 kubeadm.go:310] 
	I0127 12:35:59.736583  532607 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
	I0127 12:35:59.736761  532607 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 
	I0127 12:35:59.738118  532607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:35:59.738152  532607 cni.go:84] Creating CNI manager for ""
	I0127 12:35:59.738162  532607 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:35:59.739901  532607 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:35:59.741063  532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:35:59.759536  532607 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:35:59.777178  532607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:35:59.777199  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:59.777236  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-346100 minikube.k8s.io/updated_at=2025_01_27T12_35_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=embed-certs-346100 minikube.k8s.io/primary=true
	I0127 12:35:59.974092  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:59.974117  532607 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:00.474716  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:00.974693  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:01.474216  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:01.974205  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:58.707466  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:01.206257  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:59.736004  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:59.736626  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:59.736649  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:59.736597  534929 retry.go:31] will retry after 3.849528984s: waiting for domain to come up
	I0127 12:36:02.475052  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:02.975120  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:03.474457  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:03.577041  532607 kubeadm.go:1113] duration metric: took 3.799909499s to wait for elevateKubeSystemPrivileges
	I0127 12:36:03.577092  532607 kubeadm.go:394] duration metric: took 4m37.171719699s to StartCluster
	I0127 12:36:03.577128  532607 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:03.577224  532607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:03.579144  532607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:03.579423  532607 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:03.579505  532607 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:03.579620  532607 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-346100"
	I0127 12:36:03.579641  532607 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-346100"
	W0127 12:36:03.579650  532607 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:03.579651  532607 addons.go:69] Setting default-storageclass=true in profile "embed-certs-346100"
	I0127 12:36:03.579676  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.579688  532607 config.go:182] Loaded profile config "embed-certs-346100": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:03.579700  532607 addons.go:69] Setting dashboard=true in profile "embed-certs-346100"
	I0127 12:36:03.579723  532607 addons.go:238] Setting addon dashboard=true in "embed-certs-346100"
	I0127 12:36:03.579715  532607 addons.go:69] Setting metrics-server=true in profile "embed-certs-346100"
	W0127 12:36:03.579740  532607 addons.go:247] addon dashboard should already be in state true
	I0127 12:36:03.579694  532607 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-346100"
	I0127 12:36:03.579749  532607 addons.go:238] Setting addon metrics-server=true in "embed-certs-346100"
	W0127 12:36:03.579764  532607 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:03.579779  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.579800  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.580054  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580088  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580101  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580150  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580190  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580215  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580233  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580258  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.581024  532607 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:03.582429  532607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:03.598339  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0127 12:36:03.598375  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 12:36:03.598838  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.598892  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34331
	I0127 12:36:03.598919  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.599306  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.599470  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599486  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599497  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599511  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599722  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599738  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599912  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.599974  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.600223  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.600494  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600530  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600545  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600578  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600674  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600699  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600881  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0127 12:36:03.601524  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.602100  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.602116  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.602471  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.602687  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.606648  532607 addons.go:238] Setting addon default-storageclass=true in "embed-certs-346100"
	W0127 12:36:03.606677  532607 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:03.606709  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.607067  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.607104  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.619967  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0127 12:36:03.620348  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33043
	I0127 12:36:03.620623  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.620935  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.621427  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.621447  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.621789  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.621804  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.621998  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.622221  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.622273  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.622543  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.624486  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.624677  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.625420  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43503
	I0127 12:36:03.626112  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.626167  532607 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:03.626180  532607 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:03.626583  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.626602  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.626611  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0127 12:36:03.626942  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.627027  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.627437  532607 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:03.627453  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:03.627464  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.627467  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.627475  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.627504  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.627471  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.627836  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.628149  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.628561  532607 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:03.629535  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:03.629551  532607 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:03.629575  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.630434  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.631724  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.632213  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.632232  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.632448  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.632593  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.632682  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.632867  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.632996  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.633161  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.633189  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.633418  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.633573  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.633701  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.633812  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.634247  532607 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:03.635266  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:03.635284  532607 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:03.635305  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.637878  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.638306  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.638338  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.638542  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.638697  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.638867  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.639116  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.643537  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0127 12:36:03.643881  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.644309  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.644327  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.644644  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.644952  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.646128  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.646325  532607 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:03.646341  532607 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:03.646358  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.649282  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.649641  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.649669  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.649910  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.650077  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.650198  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.650298  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.805663  532607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:03.824512  532607 node_ready.go:35] waiting up to 6m0s for node "embed-certs-346100" to be "Ready" ...
	I0127 12:36:03.856505  532607 node_ready.go:49] node "embed-certs-346100" has status "Ready":"True"
	I0127 12:36:03.856540  532607 node_ready.go:38] duration metric: took 31.977019ms for node "embed-certs-346100" to be "Ready" ...
	I0127 12:36:03.856555  532607 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:03.863683  532607 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:03.902624  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:03.925389  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:03.977654  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:03.977686  532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:04.012033  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:04.012063  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:04.029962  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:04.029991  532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:04.076532  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:04.076565  532607 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:04.136201  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:04.136229  532607 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:04.142268  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:04.142293  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:04.174895  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:04.174919  532607 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:04.185938  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:04.185959  532607 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:04.204606  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:04.226546  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:04.226574  532607 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:04.340411  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:04.340438  532607 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:04.424847  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.424878  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.425230  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:04.425269  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.425293  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.425304  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.425329  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.425596  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.425613  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.425627  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:04.443059  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.443080  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.443380  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.443404  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.457532  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:04.457557  532607 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:04.529771  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:04.529803  532607 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:04.581907  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:05.466462  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541011177s)
	I0127 12:36:05.466526  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.466544  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.466865  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.466934  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.466947  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.466957  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.466969  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.467283  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.467328  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.467300  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.677171  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.472522816s)
	I0127 12:36:05.677230  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.677244  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.677645  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.677684  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.677699  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.677711  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.677723  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.678056  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.678091  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.678115  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.678132  532607 addons.go:479] Verifying addon metrics-server=true in "embed-certs-346100"
	I0127 12:36:05.870203  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:06.503934  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.921960102s)
	I0127 12:36:06.504007  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:06.504025  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:06.504372  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:06.504489  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:06.504506  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:06.504514  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:06.504460  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:06.504814  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:06.504834  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:06.504835  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:06.506475  532607 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-346100 addons enable metrics-server
	
	I0127 12:36:06.507672  532607 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 12:36:06.508878  532607 addons.go:514] duration metric: took 2.929397312s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 12:36:03.587872  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.588437  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has current primary IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.588458  534894 main.go:141] libmachine: (newest-cni-610630) found domain IP: 192.168.39.228
	I0127 12:36:03.588471  534894 main.go:141] libmachine: (newest-cni-610630) reserving static IP address...
	I0127 12:36:03.589076  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "newest-cni-610630", mac: "52:54:00:49:61:34", ip: "192.168.39.228"} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.589105  534894 main.go:141] libmachine: (newest-cni-610630) reserved static IP address 192.168.39.228 for domain newest-cni-610630
	I0127 12:36:03.589131  534894 main.go:141] libmachine: (newest-cni-610630) DBG | skip adding static IP to network mk-newest-cni-610630 - found existing host DHCP lease matching {name: "newest-cni-610630", mac: "52:54:00:49:61:34", ip: "192.168.39.228"}
	I0127 12:36:03.589141  534894 main.go:141] libmachine: (newest-cni-610630) waiting for SSH...
	I0127 12:36:03.589165  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Getting to WaitForSSH function...
	I0127 12:36:03.592182  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.592771  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.592796  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.593171  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Using SSH client type: external
	I0127 12:36:03.593190  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa (-rw-------)
	I0127 12:36:03.593218  534894 main.go:141] libmachine: (newest-cni-610630) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:36:03.593228  534894 main.go:141] libmachine: (newest-cni-610630) DBG | About to run SSH command:
	I0127 12:36:03.593239  534894 main.go:141] libmachine: (newest-cni-610630) DBG | exit 0
	I0127 12:36:03.733183  534894 main.go:141] libmachine: (newest-cni-610630) DBG | SSH cmd err, output: <nil>: 
	I0127 12:36:03.733566  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetConfigRaw
	I0127 12:36:03.734338  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:03.737083  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.737511  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.737553  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.737875  534894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/config.json ...
	I0127 12:36:03.738075  534894 machine.go:93] provisionDockerMachine start ...
	I0127 12:36:03.738099  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:03.738370  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:03.741025  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.741354  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.741384  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.741566  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:03.741756  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.741966  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.742141  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:03.742356  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:03.742588  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:03.742604  534894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:36:03.853610  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:36:03.853641  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
	I0127 12:36:03.853921  534894 buildroot.go:166] provisioning hostname "newest-cni-610630"
	I0127 12:36:03.853957  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
	I0127 12:36:03.854185  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:03.857441  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.857928  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.857961  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.858074  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:03.858293  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.858504  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.858678  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:03.858886  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:03.859093  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:03.859120  534894 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-610630 && echo "newest-cni-610630" | sudo tee /etc/hostname
	I0127 12:36:03.986908  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-610630
	
	I0127 12:36:03.986946  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:03.990070  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.990587  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.990628  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.990879  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:03.991122  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.991299  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.991452  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:03.991678  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:03.991897  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:03.991926  534894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-610630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-610630/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-610630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:36:04.113288  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
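The shell snippet above keeps /etc/hosts idempotent: it only rewrites the 127.0.1.1 entry when the hostname is not already mapped. A minimal Go sketch of the same logic as a pure string transformation (the function name ensureHostname is illustrative, not part of minikube):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the /etc/hosts edit shown above: if the hostname is
    // already present the content is returned unchanged; otherwise an existing
    // "127.0.1.1 ..." line is rewritten, or a new one is appended.
    func ensureHostname(hosts, name string) string {
        for _, line := range strings.Split(hosts, "\n") {
            fields := strings.Fields(line)
            if len(fields) > 1 && fields[len(fields)-1] == name {
                return hosts // hostname already mapped, nothing to do
            }
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        sample := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
        fmt.Print(ensureHostname(sample, "newest-cni-610630"))
    }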
	I0127 12:36:04.113333  534894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-471120/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-471120/.minikube}
	I0127 12:36:04.113360  534894 buildroot.go:174] setting up certificates
	I0127 12:36:04.113382  534894 provision.go:84] configureAuth start
	I0127 12:36:04.113398  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
	I0127 12:36:04.113676  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:04.116365  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.116714  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.116764  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.117068  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.119378  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.119713  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.119736  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.119918  534894 provision.go:143] copyHostCerts
	I0127 12:36:04.119990  534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem, removing ...
	I0127 12:36:04.120016  534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem
	I0127 12:36:04.120102  534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem (1082 bytes)
	I0127 12:36:04.120256  534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem, removing ...
	I0127 12:36:04.120274  534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem
	I0127 12:36:04.120316  534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem (1123 bytes)
	I0127 12:36:04.120402  534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem, removing ...
	I0127 12:36:04.120415  534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem
	I0127 12:36:04.120457  534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem (1679 bytes)
	I0127 12:36:04.120535  534894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem org=jenkins.newest-cni-610630 san=[127.0.0.1 192.168.39.228 localhost minikube newest-cni-610630]
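The provisioner generates a server certificate whose SANs cover the loopback address, the VM IP, and the node's host names. A rough sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 (throwaway CA and elided error handling; minikube itself reuses the ca.pem / ca-key.pem pair under .minikube/certs):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA key pair; error returns are ignored for brevity in this sketch.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-610630"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-610630"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.228")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }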
	I0127 12:36:04.308578  534894 provision.go:177] copyRemoteCerts
	I0127 12:36:04.308646  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:36:04.308681  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.311740  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.312147  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.312181  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.312367  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.312539  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.312718  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.312951  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.406421  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:36:04.434493  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:36:04.458820  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 12:36:04.483270  534894 provision.go:87] duration metric: took 369.872198ms to configureAuth
	I0127 12:36:04.483307  534894 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:36:04.483583  534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:04.483608  534894 machine.go:96] duration metric: took 745.518388ms to provisionDockerMachine
	I0127 12:36:04.483622  534894 start.go:293] postStartSetup for "newest-cni-610630" (driver="kvm2")
	I0127 12:36:04.483638  534894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:36:04.483676  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.484046  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:36:04.484091  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.487237  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.487689  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.487724  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.487930  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.488140  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.488365  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.488527  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.578283  534894 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:36:04.583274  534894 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:36:04.583302  534894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/addons for local assets ...
	I0127 12:36:04.583381  534894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/files for local assets ...
	I0127 12:36:04.583480  534894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem -> 4783872.pem in /etc/ssl/certs
	I0127 12:36:04.583597  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:36:04.594213  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:36:04.618506  534894 start.go:296] duration metric: took 134.861455ms for postStartSetup
	I0127 12:36:04.618569  534894 fix.go:56] duration metric: took 21.442212309s for fixHost
	I0127 12:36:04.618601  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.621910  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.622352  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.622388  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.622670  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.622872  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.623064  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.623231  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.623434  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:04.623683  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:04.623701  534894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:36:04.745637  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981364.720376969
	
	I0127 12:36:04.745668  534894 fix.go:216] guest clock: 1737981364.720376969
	I0127 12:36:04.745677  534894 fix.go:229] Guest: 2025-01-27 12:36:04.720376969 +0000 UTC Remote: 2025-01-27 12:36:04.618576525 +0000 UTC m=+21.609424923 (delta=101.800444ms)
	I0127 12:36:04.745704  534894 fix.go:200] guest clock delta is within tolerance: 101.800444ms
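The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the drift when it is small. A toy version of that comparison, using the values from the log (the 1s tolerance below is an assumed illustrative threshold, not necessarily minikube's setting):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest time parsed from `date +%s.%N`, host time taken from the log line above.
        guest := time.Unix(1737981364, 720376969)
        host := time.Date(2025, 1, 27, 12, 36, 4, 618576525, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }

        const tolerance = time.Second // assumed threshold for this sketch
        if delta > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        }
    }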
	I0127 12:36:04.745711  534894 start.go:83] releasing machines lock for "newest-cni-610630", held for 21.569374077s
	I0127 12:36:04.745742  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.746064  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:04.749116  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.749586  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.749623  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.749762  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.750369  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.750591  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.750714  534894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:36:04.750788  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.750841  534894 ssh_runner.go:195] Run: cat /version.json
	I0127 12:36:04.750872  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.753604  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.753937  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.753995  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.754036  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.754117  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.754283  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.754435  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.754463  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.754505  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.754649  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.754824  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.754704  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.754972  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.755165  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.837766  534894 ssh_runner.go:195] Run: systemctl --version
	I0127 12:36:04.870922  534894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:36:04.877067  534894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:36:04.877148  534894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:36:04.898288  534894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
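Bridge and podman CNI configurations are parked by renaming them with a .mk_disabled suffix so the runtime ignores them while minikube lays down its own bridge config. A local-filesystem sketch of that rename pass (requires the same root privileges as the `find ... -exec mv` command above; the function name is illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeConfs renames bridge/podman CNI configs in dir by appending
    // ".mk_disabled", skipping files that are already disabled.
    func disableBridgeConfs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeConfs("/etc/cni/net.d")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Printf("disabled %d bridge cni config(s): %v\n", len(disabled), disabled)
    }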
	I0127 12:36:04.898318  534894 start.go:495] detecting cgroup driver to use...
	I0127 12:36:04.898407  534894 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:36:04.932879  534894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:36:04.949987  534894 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:36:04.950133  534894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:36:04.967044  534894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:36:04.983091  534894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:36:05.124492  534894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:36:05.268901  534894 docker.go:233] disabling docker service ...
	I0127 12:36:05.268987  534894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:36:05.284320  534894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:36:05.298992  534894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:36:05.441228  534894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:36:05.609452  534894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:36:05.626916  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:36:05.647205  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:36:05.657704  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:36:05.667476  534894 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:36:05.667555  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:36:05.677468  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:36:05.688601  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:36:05.698702  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:36:05.710663  534894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:36:05.724221  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:36:05.737093  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:36:05.746742  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
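The sed edits above rewrite /etc/containerd/config.toml in place: pin the pause sandbox image, force the runc runtime onto cgroupfs by setting SystemdCgroup = false, and re-enable unprivileged ports. As a string-level illustration of the SystemdCgroup flip (not the sed command minikube actually runs):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup rewrites any "SystemdCgroup = ..." line in a containerd
    // config.toml snippet, preserving its indentation.
    func setSystemdCgroup(config string, enabled bool) string {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %t", enabled))
    }

    func main() {
        snippet := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
        fmt.Print(setSystemdCgroup(snippet, false))
    }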
	I0127 12:36:05.756481  534894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:36:05.767282  534894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:36:05.767344  534894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:36:05.780026  534894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
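When the bridge-netfilter sysctl key is missing, the br_netfilter module is loaded and IPv4 forwarding is then enabled, as the three commands above show. A condensed sketch of that fallback (must run as root; the function name ensureNetfilter is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureNetfilter loads br_netfilter if the bridge sysctl key is absent,
    // then enables IPv4 forwarding via procfs.
    func ensureNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // Key absent, so the module is not loaded yet.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
        if err := ensureNetfilter(); err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("bridge netfilter and ip_forward configured")
    }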
	I0127 12:36:05.791098  534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:05.930676  534894 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:36:05.966221  534894 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 12:36:05.966321  534894 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:36:05.971094  534894 retry.go:31] will retry after 1.421722911s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
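After restarting containerd, the start logic polls for /run/containerd/containerd.sock instead of failing on the first missing stat, as the retry line above shows. A simplified stand-in for that wait loop (a fixed 1s poll interval is an assumption; minikube's retry backoff differs):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the socket path until it exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("containerd socket is ready")
    }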
	I0127 12:36:07.393037  534894 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:36:07.398456  534894 start.go:563] Will wait 60s for crictl version
	I0127 12:36:07.398530  534894 ssh_runner.go:195] Run: which crictl
	I0127 12:36:07.402351  534894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:36:07.446224  534894 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 12:36:07.446301  534894 ssh_runner.go:195] Run: containerd --version
	I0127 12:36:07.473080  534894 ssh_runner.go:195] Run: containerd --version
	I0127 12:36:07.497663  534894 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 12:36:07.498857  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:07.501622  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:07.502032  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:07.502071  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:07.502274  534894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 12:36:07.506028  534894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:36:07.519964  534894 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 12:36:03.206663  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:05.207472  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:07.706605  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:07.521255  534894 kubeadm.go:883] updating cluster {Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Networ
k: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:36:07.521413  534894 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:36:07.521493  534894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:36:07.554098  534894 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:36:07.554125  534894 containerd.go:534] Images already preloaded, skipping extraction
	I0127 12:36:07.554187  534894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:36:07.591861  534894 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:36:07.591890  534894 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:36:07.591901  534894 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.32.1 containerd true true} ...
	I0127 12:36:07.592033  534894 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-610630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
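The kubelet unit override above is plain text with the Kubernetes version, node name, and node IP substituted in. A small sketch that renders the same drop-in from those three values (the helper name is illustrative):

    package main

    import "fmt"

    // kubeletDropIn renders the systemd override shown in the log above.
    func kubeletDropIn(version, nodeName, nodeIP string) string {
        return fmt.Sprintf(`[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

    [Install]
    `, version, nodeName, nodeIP)
    }

    func main() {
        fmt.Print(kubeletDropIn("v1.32.1", "newest-cni-610630", "192.168.39.228"))
    }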
	I0127 12:36:07.592107  534894 ssh_runner.go:195] Run: sudo crictl info
	I0127 12:36:07.633013  534894 cni.go:84] Creating CNI manager for ""
	I0127 12:36:07.633040  534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:07.633051  534894 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 12:36:07.633082  534894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-610630 NodeName:newest-cni-610630 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:36:07.633263  534894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-610630"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.228"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:36:07.633336  534894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:36:07.643906  534894 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:36:07.643972  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:36:07.653399  534894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0127 12:36:07.671016  534894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:36:07.691229  534894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2311 bytes)
	I0127 12:36:07.711891  534894 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I0127 12:36:07.716614  534894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:36:07.730520  534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:07.852685  534894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:07.870469  534894 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630 for IP: 192.168.39.228
	I0127 12:36:07.870498  534894 certs.go:194] generating shared ca certs ...
	I0127 12:36:07.870523  534894 certs.go:226] acquiring lock for ca certs: {Name:mk02d117412837bd489768267e2b174e6c3ff6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:07.870697  534894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key
	I0127 12:36:07.870773  534894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key
	I0127 12:36:07.870785  534894 certs.go:256] generating profile certs ...
	I0127 12:36:07.870943  534894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/client.key
	I0127 12:36:07.871073  534894 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.key.2ce4e80e
	I0127 12:36:07.871140  534894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.key
	I0127 12:36:07.871291  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem (1338 bytes)
	W0127 12:36:07.871334  534894 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387_empty.pem, impossibly tiny 0 bytes
	I0127 12:36:07.871349  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:36:07.871394  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:36:07.871429  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:36:07.871461  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem (1679 bytes)
	I0127 12:36:07.871519  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:36:07.872415  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:36:07.904294  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:36:07.944289  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:36:07.979498  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:36:08.010836  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 12:36:08.041389  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:36:08.201622  532844 pod_ready.go:82] duration metric: took 4m0.001032286s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" ...
	E0127 12:36:08.201658  532844 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 12:36:08.201683  532844 pod_ready.go:39] duration metric: took 4m14.040174083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:08.201724  532844 kubeadm.go:597] duration metric: took 4m21.555444284s to restartPrimaryControlPlane
	W0127 12:36:08.201798  532844 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:36:08.201833  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 12:36:10.133466  532844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.93160232s)
	I0127 12:36:10.133550  532844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:36:10.155296  532844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:36:10.170023  532844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:36:10.183165  532844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:36:10.183194  532844 kubeadm.go:157] found existing configuration files:
	
	I0127 12:36:10.183257  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 12:36:10.195175  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:36:10.195253  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:36:10.208349  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 12:36:10.220351  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:36:10.220429  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:36:10.238914  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 12:36:10.254995  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:36:10.255067  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:36:10.266753  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 12:36:10.278422  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:36:10.278490  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
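Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when it does not match, so the subsequent `kubeadm init` regenerates it. A compact sketch of that cleanup (paths and endpoint taken from the log above; not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeStaleKubeconfigs drops any kubeconfig that does not reference the
    // expected control-plane endpoint, ignoring files that are already absent.
    func removeStaleKubeconfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // file exists and already targets the endpoint, keep it
            }
            if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
                fmt.Println("remove:", err)
            }
        }
    }

    func main() {
        removeStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }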
	I0127 12:36:10.292279  532844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:36:10.351007  532844 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:36:10.351189  532844 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:36:10.469769  532844 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:36:10.469949  532844 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:36:10.470056  532844 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:36:10.479353  532844 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:36:10.481858  532844 out.go:235]   - Generating certificates and keys ...
	I0127 12:36:10.481959  532844 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:36:10.482038  532844 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:36:10.482135  532844 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:36:10.482236  532844 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:36:10.482358  532844 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:36:10.482442  532844 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:36:10.482525  532844 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:36:10.482633  532844 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:36:10.483039  532844 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:36:10.483619  532844 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:36:10.483746  532844 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:36:10.483829  532844 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:36:10.585561  532844 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:36:10.784195  532844 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:36:10.958020  532844 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:36:11.223196  532844 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:36:11.439416  532844 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:36:11.440271  532844 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:36:11.444236  532844 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:36:08.374973  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:10.872073  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:11.445766  532844 out.go:235]   - Booting up control plane ...
	I0127 12:36:11.445895  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:36:11.445993  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:36:11.447764  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:36:11.484418  532844 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:36:11.496508  532844 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:36:11.496594  532844 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:36:11.681886  532844 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:36:11.682039  532844 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:36:12.183183  532844 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.076889ms
	I0127 12:36:12.183305  532844 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:36:08.074441  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:36:08.107699  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:36:08.137950  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem --> /usr/share/ca-certificates/478387.pem (1338 bytes)
	I0127 12:36:08.163896  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /usr/share/ca-certificates/4783872.pem (1708 bytes)
	I0127 12:36:08.188493  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:36:08.217196  534894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:36:08.237633  534894 ssh_runner.go:195] Run: openssl version
	I0127 12:36:08.244270  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478387.pem && ln -fs /usr/share/ca-certificates/478387.pem /etc/ssl/certs/478387.pem"
	I0127 12:36:08.258544  534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478387.pem
	I0127 12:36:08.264117  534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:28 /usr/share/ca-certificates/478387.pem
	I0127 12:36:08.264194  534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478387.pem
	I0127 12:36:08.271823  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478387.pem /etc/ssl/certs/51391683.0"
	I0127 12:36:08.283160  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783872.pem && ln -fs /usr/share/ca-certificates/4783872.pem /etc/ssl/certs/4783872.pem"
	I0127 12:36:08.293600  534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783872.pem
	I0127 12:36:08.299046  534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:28 /usr/share/ca-certificates/4783872.pem
	I0127 12:36:08.299115  534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783872.pem
	I0127 12:36:08.306015  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783872.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:36:08.317692  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:36:08.328317  534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:36:08.332856  534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:36:08.332912  534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:36:08.342875  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
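	The `openssl x509 -hash -noout` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: each CA certificate under /etc/ssl/certs must also be reachable through a symlink named after its subject-name hash (for example b5213941.0) so TLS clients can locate it. A minimal stand-alone sketch of those two steps, assuming openssl is on the PATH; the helper name is illustrative and this is not minikube's own code:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert reproduces the pattern from the log: compute the OpenSSL
// subject-name hash of a CA certificate and symlink it into /etc/ssl/certs
// under "<hash>.0" so the system trust store can resolve it.
// Illustrative sketch, not the minikube implementation.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	return exec.Command("ln", "-fs", certPath, link).Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```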
	I0127 12:36:08.355240  534894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:36:08.363234  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:36:08.369655  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:36:08.377149  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:36:08.382739  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:36:08.388277  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:36:08.395644  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
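	The `-checkend 86400` invocations above make openssl exit non-zero when a certificate expires within the next 86400 seconds (24 hours), which is how the existing control-plane certificates are judged reusable before the restart. The same check can be expressed directly against the certificate's NotAfter field; the sketch below is an equivalent stand-alone version (the function name and paths are illustrative, not the code that produced these lines):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the case in which `openssl x509 -checkend <seconds>` exits non-zero.
// Illustrative sketch only.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```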
	I0127 12:36:08.403226  534894 kubeadm.go:392] StartCluster: {Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:36:08.403325  534894 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 12:36:08.403369  534894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:36:08.454071  534894 cri.go:89] found id: "05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2"
	I0127 12:36:08.454100  534894 cri.go:89] found id: "a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250"
	I0127 12:36:08.454104  534894 cri.go:89] found id: "357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271"
	I0127 12:36:08.454108  534894 cri.go:89] found id: "8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b"
	I0127 12:36:08.454118  534894 cri.go:89] found id: "631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a"
	I0127 12:36:08.454123  534894 cri.go:89] found id: "3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13"
	I0127 12:36:08.454127  534894 cri.go:89] found id: "ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76"
	I0127 12:36:08.454130  534894 cri.go:89] found id: "0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c"
	I0127 12:36:08.454134  534894 cri.go:89] found id: ""
	I0127 12:36:08.454198  534894 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 12:36:08.472428  534894 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T12:36:08Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 12:36:08.472525  534894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:36:08.484156  534894 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:36:08.484183  534894 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:36:08.484255  534894 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:36:08.494975  534894 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:36:08.496360  534894 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-610630" does not appear in /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:08.497417  534894 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-471120/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-610630" cluster setting kubeconfig missing "newest-cni-610630" context setting]
	I0127 12:36:08.498843  534894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:08.501415  534894 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:36:08.513111  534894 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.228
	I0127 12:36:08.513147  534894 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:36:08.513163  534894 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 12:36:08.513216  534894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:36:08.561176  534894 cri.go:89] found id: "05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2"
	I0127 12:36:08.561203  534894 cri.go:89] found id: "a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250"
	I0127 12:36:08.561209  534894 cri.go:89] found id: "357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271"
	I0127 12:36:08.561214  534894 cri.go:89] found id: "8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b"
	I0127 12:36:08.561218  534894 cri.go:89] found id: "631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a"
	I0127 12:36:08.561223  534894 cri.go:89] found id: "3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13"
	I0127 12:36:08.561227  534894 cri.go:89] found id: "ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76"
	I0127 12:36:08.561231  534894 cri.go:89] found id: "0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c"
	I0127 12:36:08.561235  534894 cri.go:89] found id: ""
	I0127 12:36:08.561242  534894 cri.go:252] Stopping containers: [05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2 a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250 357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271 8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b 631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a 3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13 ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76 0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c]
	I0127 12:36:08.561301  534894 ssh_runner.go:195] Run: which crictl
	I0127 12:36:08.565588  534894 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2 a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250 357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271 8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b 631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a 3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13 ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76 0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c
	I0127 12:36:08.619372  534894 ssh_runner.go:195] Run: sudo systemctl stop kubelet
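	Before re-running the kubeadm phases, the restart path stops every kube-system container through crictl and then stops the kubelet itself, so nothing competes with the regenerated static-pod manifests. A rough stand-alone equivalent of those three commands (not the minikube implementation; the helper name is illustrative) is:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystem mirrors the sequence in the log: list the IDs of all
// kube-system containers known to the CRI runtime, stop them with a 10s
// timeout, then stop the kubelet. Illustrative sketch only.
func stopKubeSystem() error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return fmt.Errorf("listing containers: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) > 0 {
		args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
		if err := exec.Command("sudo", args...).Run(); err != nil {
			return fmt.Errorf("stopping containers: %w", err)
		}
	}
	return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}

func main() {
	if err := stopKubeSystem(); err != nil {
		fmt.Println(err)
	}
}
```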
	I0127 12:36:08.636553  534894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:36:08.648359  534894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:36:08.648385  534894 kubeadm.go:157] found existing configuration files:
	
	I0127 12:36:08.648439  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:36:08.659186  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:36:08.659257  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:36:08.668828  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:36:08.679551  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:36:08.679624  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:36:08.689530  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:36:08.701111  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:36:08.701164  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:36:08.709830  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:36:08.718407  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:36:08.718495  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:36:08.727400  534894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:36:08.736296  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:08.887779  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:09.818917  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:10.080535  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:10.159744  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
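	Because the kubeconfig and manifest files were missing, the restart falls back to replaying individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the already-rendered /var/tmp/minikube/kubeadm.yaml rather than running a full `kubeadm init`. In the log these run as root via `sudo env PATH=...`; the condensed sketch below replays the same sequence and assumes the kubeadm binary lives in the versioned binaries directory shown above (illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runRestartPhases replays the kubeadm phases from the log. Each phase reads
// the pre-rendered kubeadm.yaml, so the cluster configuration is not
// recomputed here. Illustrative sketch only.
func runRestartPhases() error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.32.1/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runRestartPhases(); err != nil {
		fmt.Println(err)
	}
}
```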
	I0127 12:36:10.232154  534894 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:10.232252  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:10.732454  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:11.233357  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:11.264081  534894 api_server.go:72] duration metric: took 1.031921463s to wait for apiserver process to appear ...
	I0127 12:36:11.264115  534894 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:11.264142  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:11.264724  534894 api_server.go:269] stopped: https://192.168.39.228:8443/healthz: Get "https://192.168.39.228:8443/healthz": dial tcp 192.168.39.228:8443: connect: connection refused
	I0127 12:36:11.764442  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:14.358365  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:36:14.358472  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:36:14.358502  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:14.408913  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:36:14.409034  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:36:14.764463  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:14.771512  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:36:14.771584  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:36:15.264813  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:15.270318  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:36:15.270344  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:36:15.765063  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:15.772704  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:36:15.772774  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:36:16.264285  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:16.271130  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0127 12:36:16.281041  534894 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:16.281071  534894 api_server.go:131] duration metric: took 5.016947638s to wait for apiserver health ...
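	The 403 and 500 responses above are expected while the apiserver finishes its post-start hooks (the RBAC bootstrap roles and priority classes are created shortly after the listener comes up), so the check simply re-polls /healthz every 500ms until it returns 200. A bare-bones poller in the same spirit, skipping certificate verification the way a bootstrap-time probe has to; the endpoint is the one from this run and the function name is illustrative:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers 200
// or the deadline passes; 403/500 responses count as "not ready yet", matching
// the behaviour visible in the log. Illustrative sketch only.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // bootstrap probe, CA not trusted yet
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.228:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```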
	I0127 12:36:16.281087  534894 cni.go:84] Creating CNI manager for ""
	I0127 12:36:16.281096  534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:16.282806  534894 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:36:16.284232  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:36:16.297533  534894 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:36:16.314501  534894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:16.324319  534894 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:16.324349  534894 system_pods.go:61] "coredns-668d6bf9bc-n6hwn" [24d3582e-97d0-4bb8-b12a-6f69ecd72309] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:16.324357  534894 system_pods.go:61] "coredns-668d6bf9bc-vg4bb" [3d20d4a5-8ddf-4166-af63-47beab76d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:16.324365  534894 system_pods.go:61] "etcd-newest-cni-610630" [0a812f8b-1e38-49a2-be17-9db3fb1979db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:36:16.324379  534894 system_pods.go:61] "kube-apiserver-newest-cni-610630" [292fb9b9-ccfb-49b5-892d-078e5897981d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:36:16.324385  534894 system_pods.go:61] "kube-controller-manager-newest-cni-610630" [36d1c2e6-42ce-4d4a-8bba-ff7beb6551ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:36:16.324391  534894 system_pods.go:61] "kube-proxy-8szpt" [11487c0b-6afa-4c83-9eb8-6a9f609f7b58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:36:16.324395  534894 system_pods.go:61] "kube-scheduler-newest-cni-610630" [2f8744a9-54dc-4b5c-92f6-b0d3b9b0de7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:36:16.324400  534894 system_pods.go:61] "metrics-server-f79f97bbb-kcc5g" [6593df15-330d-4389-b878-45d396d718b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:16.324408  534894 system_pods.go:61] "storage-provisioner" [3cc7604e-fcbb-48e5-8445-82d5150b759f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:36:16.324413  534894 system_pods.go:74] duration metric: took 9.892595ms to wait for pod list to return data ...
	I0127 12:36:16.324424  534894 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:36:16.327339  534894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:36:16.327364  534894 node_conditions.go:123] node cpu capacity is 2
	I0127 12:36:16.327385  534894 node_conditions.go:105] duration metric: took 2.956884ms to run NodePressure ...
	I0127 12:36:16.327404  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:16.991253  534894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:36:17.011999  534894 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:17.012027  534894 kubeadm.go:597] duration metric: took 8.527837095s to restartPrimaryControlPlane
	I0127 12:36:17.012040  534894 kubeadm.go:394] duration metric: took 8.608822701s to StartCluster
	I0127 12:36:17.012072  534894 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:17.012204  534894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:17.014682  534894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:17.015030  534894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:17.015158  534894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:17.015477  534894 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-610630"
	I0127 12:36:17.015505  534894 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-610630"
	I0127 12:36:17.015320  534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:17.015542  534894 addons.go:69] Setting metrics-server=true in profile "newest-cni-610630"
	I0127 12:36:17.015555  534894 addons.go:238] Setting addon metrics-server=true in "newest-cni-610630"
	W0127 12:36:17.015562  534894 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:17.015556  534894 addons.go:69] Setting default-storageclass=true in profile "newest-cni-610630"
	I0127 12:36:17.015582  534894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-610630"
	I0127 12:36:17.015588  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.015521  534894 addons.go:69] Setting dashboard=true in profile "newest-cni-610630"
	I0127 12:36:17.015608  534894 addons.go:238] Setting addon dashboard=true in "newest-cni-610630"
	W0127 12:36:17.015617  534894 addons.go:247] addon dashboard should already be in state true
	I0127 12:36:17.015643  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.016040  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.016039  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.016050  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	W0127 12:36:17.015533  534894 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:17.016079  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.016082  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.016083  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.016420  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.016423  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.016450  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.031224  534894 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:17.032914  534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:17.036836  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0127 12:36:17.037340  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.037862  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.037882  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.038318  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.038866  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.038905  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.039846  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0127 12:36:17.040182  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.040873  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.040890  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.041292  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.041587  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.045301  534894 addons.go:238] Setting addon default-storageclass=true in "newest-cni-610630"
	W0127 12:36:17.045320  534894 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:17.045352  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.045759  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.045799  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.048089  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0127 12:36:17.048729  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.049195  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.049213  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.049644  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.050180  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.050222  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.050700  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0127 12:36:17.051087  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.051560  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.051581  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.051971  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.052563  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.052600  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.065040  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0127 12:36:17.065537  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.066047  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.066072  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.066400  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.066556  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.068438  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.070276  534894 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:17.071684  534894 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:17.072821  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:17.072844  534894 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:17.072867  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.073985  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0127 12:36:17.074526  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.075082  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.075099  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.075677  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.076310  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.076356  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.078889  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.079441  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.079463  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.079747  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.079954  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.080136  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.080333  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.091530  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I0127 12:36:17.092126  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.092669  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.092694  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.093285  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.093437  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.095189  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I0127 12:36:17.095304  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.095761  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.096341  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.096358  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.096828  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.097030  534894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:17.097195  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.097833  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40641
	I0127 12:36:17.098239  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.098254  534894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:17.098271  534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:17.098299  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.098871  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.098889  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.099255  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.099465  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.099541  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.100856  534894 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:12.874242  532607 pod_ready.go:93] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.874282  532607 pod_ready.go:82] duration metric: took 9.010574512s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.874303  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.882689  532607 pod_ready.go:93] pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.882775  532607 pod_ready.go:82] duration metric: took 8.462495ms for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.882801  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.888659  532607 pod_ready.go:93] pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.888693  532607 pod_ready.go:82] duration metric: took 5.874272ms for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.888707  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.894080  532607 pod_ready.go:93] pod "kube-proxy-smp6l" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.894141  532607 pod_ready.go:82] duration metric: took 5.425838ms for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.894163  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.900793  532607 pod_ready.go:93] pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.900849  532607 pod_ready.go:82] duration metric: took 6.668808ms for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.900869  532607 pod_ready.go:39] duration metric: took 9.044300135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:12.900904  532607 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:12.900995  532607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:12.922995  532607 api_server.go:72] duration metric: took 9.343524429s to wait for apiserver process to appear ...
	I0127 12:36:12.923066  532607 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:12.923097  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:36:12.930234  532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 200:
	ok
	I0127 12:36:12.931482  532607 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:12.931504  532607 api_server.go:131] duration metric: took 8.421115ms to wait for apiserver health ...
	I0127 12:36:12.931513  532607 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:13.073659  532607 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:13.073701  532607 system_pods.go:61] "coredns-668d6bf9bc-46nfk" [ca146154-7693-43e5-ae2a-f0c3148327b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:13.073712  532607 system_pods.go:61] "coredns-668d6bf9bc-9p64b" [4d44d79e-ea3d-4085-9fb2-356746e71e9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:13.073722  532607 system_pods.go:61] "etcd-embed-certs-346100" [cb00782a-b078-43ee-aa3f-4806aa7629d6] Running
	I0127 12:36:13.073729  532607 system_pods.go:61] "kube-apiserver-embed-certs-346100" [7b0a8d77-4737-4bde-8e2a-2462c524f9a2] Running
	I0127 12:36:13.073735  532607 system_pods.go:61] "kube-controller-manager-embed-certs-346100" [196254b2-812b-43a4-ae10-d55a11957faf] Running
	I0127 12:36:13.073741  532607 system_pods.go:61] "kube-proxy-smp6l" [886c9cd4-795b-4e33-a16e-e12302c37665] Running
	I0127 12:36:13.073746  532607 system_pods.go:61] "kube-scheduler-embed-certs-346100" [90cbc1fe-52a3-45d8-a8e9-edc60f5c4829] Running
	I0127 12:36:13.073754  532607 system_pods.go:61] "metrics-server-f79f97bbb-w8fsn" [3a78ab43-37b0-4dc0-89a9-59a558ef997c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:13.073811  532607 system_pods.go:61] "storage-provisioner" [0d021617-8412-4f33-ba4f-2b3b458721ff] Running
	I0127 12:36:13.073828  532607 system_pods.go:74] duration metric: took 142.306493ms to wait for pod list to return data ...
	I0127 12:36:13.073848  532607 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:13.273298  532607 default_sa.go:45] found service account: "default"
	I0127 12:36:13.273415  532607 default_sa.go:55] duration metric: took 199.555226ms for default service account to be created ...
	I0127 12:36:13.273446  532607 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:36:13.477525  532607 system_pods.go:87] 9 kube-system pods found
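	The interleaved 532607 lines above come from the parallel embed-certs run, where each control-plane pod is polled until its Ready condition reports True before the apiserver and system-pod checks proceed. A client-go sketch of that per-pod wait, assuming the jenkins kubeconfig path from this job; this is illustrative and not minikube's pod_ready helper:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls one pod until its Ready condition is True, roughly what
// the pod_ready.go lines above report for each control-plane pod.
// Illustrative sketch only.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20318-471120/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-embed-certs-346100", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```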
	I0127 12:36:17.101529  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.101719  534894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:17.101731  534894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:17.101745  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.102276  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:17.102295  534894 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:17.102329  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.102718  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.103291  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.103308  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.103462  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.103607  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.103729  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.103834  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.106885  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.107336  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.107361  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.107579  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.107585  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.107768  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.107957  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.108065  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.108184  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.108305  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.108457  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.108478  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.108587  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.108674  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.319272  534894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:17.355389  534894 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:17.355483  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:17.383883  534894 api_server.go:72] duration metric: took 368.528555ms to wait for apiserver process to appear ...
	I0127 12:36:17.383915  534894 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:17.383940  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:17.392047  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0127 12:36:17.393460  534894 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:17.393491  534894 api_server.go:131] duration metric: took 9.56764ms to wait for apiserver health ...
	I0127 12:36:17.393503  534894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:17.419483  534894 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:17.419523  534894 system_pods.go:61] "coredns-668d6bf9bc-n6hwn" [24d3582e-97d0-4bb8-b12a-6f69ecd72309] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:17.419533  534894 system_pods.go:61] "coredns-668d6bf9bc-vg4bb" [3d20d4a5-8ddf-4166-af63-47beab76d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:17.419543  534894 system_pods.go:61] "etcd-newest-cni-610630" [0a812f8b-1e38-49a2-be17-9db3fb1979db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:36:17.419550  534894 system_pods.go:61] "kube-apiserver-newest-cni-610630" [292fb9b9-ccfb-49b5-892d-078e5897981d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:36:17.419559  534894 system_pods.go:61] "kube-controller-manager-newest-cni-610630" [36d1c2e6-42ce-4d4a-8bba-ff7beb6551ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:36:17.419565  534894 system_pods.go:61] "kube-proxy-8szpt" [11487c0b-6afa-4c83-9eb8-6a9f609f7b58] Running
	I0127 12:36:17.419574  534894 system_pods.go:61] "kube-scheduler-newest-cni-610630" [2f8744a9-54dc-4b5c-92f6-b0d3b9b0de7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:36:17.419582  534894 system_pods.go:61] "metrics-server-f79f97bbb-kcc5g" [6593df15-330d-4389-b878-45d396d718b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:17.419591  534894 system_pods.go:61] "storage-provisioner" [3cc7604e-fcbb-48e5-8445-82d5150b759f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:36:17.419601  534894 system_pods.go:74] duration metric: took 26.090469ms to wait for pod list to return data ...
	I0127 12:36:17.419614  534894 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:17.422917  534894 default_sa.go:45] found service account: "default"
	I0127 12:36:17.422941  534894 default_sa.go:55] duration metric: took 3.317044ms for default service account to be created ...
	I0127 12:36:17.422956  534894 kubeadm.go:582] duration metric: took 407.606907ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 12:36:17.422975  534894 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:36:17.429059  534894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:36:17.429091  534894 node_conditions.go:123] node cpu capacity is 2
	I0127 12:36:17.429116  534894 node_conditions.go:105] duration metric: took 6.133766ms to run NodePressure ...
	I0127 12:36:17.429138  534894 start.go:241] waiting for startup goroutines ...
	I0127 12:36:17.493751  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:17.493777  534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:17.496271  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:17.540289  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:17.540321  534894 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:17.595530  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:17.595565  534894 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:17.609027  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:17.609055  534894 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:17.726024  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:17.764459  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:17.764492  534894 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:17.764569  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:17.852391  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:17.852429  534894 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:17.964392  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:17.964417  534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:18.185418  532844 kubeadm.go:310] [api-check] The API server is healthy after 6.002059282s
	I0127 12:36:18.204454  532844 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:36:18.218201  532844 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:36:18.245054  532844 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:36:18.245331  532844 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-887672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:36:18.257186  532844 kubeadm.go:310] [bootstrap-token] Using token: 5yhtlj.kyb5uzy41lrz34us
	I0127 12:36:18.258581  532844 out.go:235]   - Configuring RBAC rules ...
	I0127 12:36:18.258747  532844 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:36:18.265191  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:36:18.272296  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:36:18.285037  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:36:18.285204  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:36:18.285313  532844 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:36:18.593364  532844 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:36:19.042942  532844 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:36:19.593432  532844 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:36:19.594797  532844 kubeadm.go:310] 
	I0127 12:36:19.594875  532844 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:36:19.594888  532844 kubeadm.go:310] 
	I0127 12:36:19.594970  532844 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:36:19.594981  532844 kubeadm.go:310] 
	I0127 12:36:19.595011  532844 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:36:19.595081  532844 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:36:19.595152  532844 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:36:19.595166  532844 kubeadm.go:310] 
	I0127 12:36:19.595239  532844 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:36:19.595246  532844 kubeadm.go:310] 
	I0127 12:36:19.595301  532844 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:36:19.595308  532844 kubeadm.go:310] 
	I0127 12:36:19.595371  532844 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:36:19.595464  532844 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:36:19.595545  532844 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:36:19.595554  532844 kubeadm.go:310] 
	I0127 12:36:19.595667  532844 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:36:19.595757  532844 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:36:19.595767  532844 kubeadm.go:310] 
	I0127 12:36:19.595869  532844 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
	I0127 12:36:19.595998  532844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
	I0127 12:36:19.596017  532844 kubeadm.go:310] 	--control-plane 
	I0127 12:36:19.596021  532844 kubeadm.go:310] 
	I0127 12:36:19.596121  532844 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:36:19.596137  532844 kubeadm.go:310] 
	I0127 12:36:19.596223  532844 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
	I0127 12:36:19.596305  532844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 
	I0127 12:36:19.598645  532844 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:36:19.598687  532844 cni.go:84] Creating CNI manager for ""
	I0127 12:36:19.598696  532844 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:19.600188  532844 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:36:18.113709  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:18.113742  534894 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:18.153599  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:18.153635  534894 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:18.176500  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:18.176539  534894 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:18.216973  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:18.217007  534894 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:18.274511  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:18.274583  534894 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:18.342333  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:18.361302  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:18.361342  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:18.361665  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:18.361699  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:18.361710  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:18.361719  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:18.362117  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:18.362140  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:18.362144  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:18.371041  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:18.371065  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:18.371339  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:18.371377  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.594328  534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868263184s)
	I0127 12:36:19.594692  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.594482  534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.829887156s)
	I0127 12:36:19.594790  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.594804  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.595140  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:19.595208  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.595219  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.595238  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.595247  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.595556  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:19.595579  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.595600  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.595618  534894 addons.go:479] Verifying addon metrics-server=true in "newest-cni-610630"
	I0127 12:36:19.596388  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.596722  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.596754  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.596763  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.596770  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.597063  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:19.597086  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.597098  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:20.095246  534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.752863121s)
	I0127 12:36:20.095306  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:20.095324  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:20.095623  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:20.095685  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:20.095695  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:20.095711  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:20.095721  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:20.096021  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:20.096038  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:20.096055  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:20.097482  534894 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-610630 addons enable metrics-server
	
	I0127 12:36:20.098730  534894 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0127 12:36:20.099860  534894 addons.go:514] duration metric: took 3.084737287s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0127 12:36:20.099913  534894 start.go:246] waiting for cluster config update ...
	I0127 12:36:20.099934  534894 start.go:255] writing updated cluster config ...
	I0127 12:36:20.100260  534894 ssh_runner.go:195] Run: rm -f paused
	I0127 12:36:20.153018  534894 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:36:20.154413  534894 out.go:177] * Done! kubectl is now configured to use "newest-cni-610630" cluster and "default" namespace by default
	I0127 12:36:19.601391  532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:36:19.615483  532844 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:36:19.641045  532844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:36:19.641123  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:19.641161  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-887672 minikube.k8s.io/updated_at=2025_01_27T12_36_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=default-k8s-diff-port-887672 minikube.k8s.io/primary=true
	I0127 12:36:19.655315  532844 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:19.893685  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:20.394472  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:20.893933  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:21.394823  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:21.893992  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:22.393950  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:22.894084  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:23.394506  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:23.893909  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:24.393790  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:24.491305  532844 kubeadm.go:1113] duration metric: took 4.850249048s to wait for elevateKubeSystemPrivileges
	I0127 12:36:24.491356  532844 kubeadm.go:394] duration metric: took 4m37.901720321s to StartCluster
	I0127 12:36:24.491385  532844 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:24.491488  532844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:24.493752  532844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:24.494040  532844 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.130 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:24.494175  532844 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:24.494273  532844 config.go:182] Loaded profile config "default-k8s-diff-port-887672": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:24.494285  532844 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494323  532844 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-887672"
	I0127 12:36:24.494316  532844 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494338  532844 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494372  532844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-887672"
	I0127 12:36:24.494381  532844 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.494394  532844 addons.go:247] addon dashboard should already be in state true
	W0127 12:36:24.494332  532844 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:24.494432  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494463  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494323  532844 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494553  532844 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.494564  532844 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:24.494598  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494863  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494871  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494871  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494905  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.494911  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495037  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.495049  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495123  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495481  532844 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:24.496811  532844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:24.513577  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0127 12:36:24.514115  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.514694  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.514720  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.515161  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.515484  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
	I0127 12:36:24.515836  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0127 12:36:24.515999  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0127 12:36:24.516094  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.516144  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.516192  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.516413  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.516675  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.516695  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.516974  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.516994  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.517001  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.517393  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.517583  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.517647  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.518197  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.518252  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.518469  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.518494  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.518868  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.519422  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.519470  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.521629  532844 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.521653  532844 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:24.521684  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.522040  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.522081  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.534712  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0127 12:36:24.535195  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.536504  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.536527  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.536554  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0127 12:36:24.536902  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.536959  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.537111  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.537597  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.537616  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.537969  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.538145  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.538989  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0127 12:36:24.539580  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.540009  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0127 12:36:24.540196  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.540422  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.540715  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.540879  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.540902  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.540934  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.540948  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.541341  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.541388  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.541685  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.542042  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.542090  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.542251  532844 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:24.542373  532844 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:24.543206  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.543412  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:24.543430  532844 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:24.543460  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.544493  532844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:24.545545  532844 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:24.545643  532844 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:24.545656  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:24.545671  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.546541  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:24.546563  532844 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:24.546584  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.547093  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.547276  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.547478  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.547900  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.548065  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.547944  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.548278  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.549918  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550146  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.550170  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550429  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.550517  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550608  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.550758  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.550914  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.550956  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.550993  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.551165  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.551308  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.551460  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.551595  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.566621  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0127 12:36:24.567007  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.567434  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.567460  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.567879  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.568040  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.569632  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.569844  532844 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:24.569859  532844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:24.569875  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.572937  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.573361  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.573377  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.573577  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.573757  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.573888  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.574044  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.747290  532844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:24.779846  532844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-887672" to be "Ready" ...
	I0127 12:36:24.813551  532844 node_ready.go:49] node "default-k8s-diff-port-887672" has status "Ready":"True"
	I0127 12:36:24.813582  532844 node_ready.go:38] duration metric: took 33.68566ms for node "default-k8s-diff-port-887672" to be "Ready" ...
	I0127 12:36:24.813594  532844 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:24.825398  532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:24.855841  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:24.855869  532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:24.865288  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:24.890399  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:24.907963  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:24.907990  532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:24.923409  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:24.923434  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:24.967186  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:24.967211  532844 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:25.003133  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:25.003167  532844 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:25.031491  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:25.031515  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:25.086171  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:25.086201  532844 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:25.147825  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:25.152298  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:25.152324  532844 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:25.203235  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:25.203264  532844 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:25.242547  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:25.242578  532844 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:25.281622  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:25.281659  532844 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:25.312416  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:25.312444  532844 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:25.365802  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:25.651534  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.651566  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.651590  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.651612  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.651995  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652009  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652020  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652021  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652033  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.652036  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652040  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.652047  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652055  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.652063  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.652511  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652572  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652594  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652580  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652592  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652796  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.667377  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.667403  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.667693  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.667709  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.974214  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.974246  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.974553  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.974574  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.974591  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.974600  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.974992  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.975017  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.975032  532844 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-887672"
	I0127 12:36:26.960702  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:27.097489  532844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.731632212s)
	I0127 12:36:27.097551  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:27.097567  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:27.097886  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:27.097909  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:27.097909  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:27.097917  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:27.097935  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:27.098221  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:27.098291  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:27.099837  532844 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-887672 addons enable metrics-server
	
	I0127 12:36:27.101354  532844 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 12:36:27.102395  532844 addons.go:514] duration metric: took 2.608238219s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 12:36:29.331790  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:31.334726  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:33.834237  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:34.374688  532844 pod_ready.go:93] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.374713  532844 pod_ready.go:82] duration metric: took 9.549290033s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.374725  532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.399299  532844 pod_ready.go:93] pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.399323  532844 pod_ready.go:82] duration metric: took 24.589743ms for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.399332  532844 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.421329  532844 pod_ready.go:93] pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.421359  532844 pod_ready.go:82] duration metric: took 22.019877ms for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.421399  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.427922  532844 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.427946  532844 pod_ready.go:82] duration metric: took 6.537775ms for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.427957  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.447675  532844 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.447701  532844 pod_ready.go:82] duration metric: took 19.736139ms for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.447713  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.729783  532844 pod_ready.go:93] pod "kube-proxy-xl46c" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.729827  532844 pod_ready.go:82] duration metric: took 282.092476ms for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.729841  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:35.128755  532844 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:35.128781  532844 pod_ready.go:82] duration metric: took 398.931642ms for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:35.128790  532844 pod_ready.go:39] duration metric: took 10.315186396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:35.128806  532844 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:35.128870  532844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:35.148548  532844 api_server.go:72] duration metric: took 10.654456335s to wait for apiserver process to appear ...
	I0127 12:36:35.148574  532844 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:35.148597  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:36:35.156175  532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 200:
	ok
	I0127 12:36:35.157842  532844 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:35.157866  532844 api_server.go:131] duration metric: took 9.283401ms to wait for apiserver health ...
	I0127 12:36:35.157875  532844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:35.339567  532844 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:35.339606  532844 system_pods.go:61] "coredns-668d6bf9bc-jc882" [cc7b1851-f0b2-406d-b972-155b02dcefc6] Running
	I0127 12:36:35.339614  532844 system_pods.go:61] "coredns-668d6bf9bc-s6rln" [553e1b5c-1bb3-48f4-bf25-6837dae6b627] Running
	I0127 12:36:35.339620  532844 system_pods.go:61] "etcd-default-k8s-diff-port-887672" [cfe71b01-c4c5-4772-904f-0f22ebdc9481] Running
	I0127 12:36:35.339625  532844 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-887672" [09952f8b-2235-45c2-aac8-328369a341dd] Running
	I0127 12:36:35.339631  532844 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-887672" [6aee732f-0e4f-4362-b2d5-38e533a146c4] Running
	I0127 12:36:35.339636  532844 system_pods.go:61] "kube-proxy-xl46c" [c2ddd14b-3d9e-4985-935e-5f64d188e68e] Running
	I0127 12:36:35.339641  532844 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-887672" [7a436b79-cc6a-4311-9cb6-24537ed6aed0] Running
	I0127 12:36:35.339652  532844 system_pods.go:61] "metrics-server-f79f97bbb-twqz4" [107a2af6-937d-4c95-a8dd-f47f59dd3afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:35.339659  532844 system_pods.go:61] "storage-provisioner" [ebd493f5-ab93-4083-8174-aceb44741e99] Running
	I0127 12:36:35.339675  532844 system_pods.go:74] duration metric: took 181.791009ms to wait for pod list to return data ...
	I0127 12:36:35.339689  532844 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:35.528977  532844 default_sa.go:45] found service account: "default"
	I0127 12:36:35.529018  532844 default_sa.go:55] duration metric: took 189.31757ms for default service account to be created ...
	I0127 12:36:35.529033  532844 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:36:35.732388  532844 system_pods.go:87] 9 kube-system pods found
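
	The readiness sequence logged above can be reproduced by hand; a minimal sketch, assuming the kubectl context minikube created for this profile (default-k8s-diff-port-887672) and reusing the endpoint taken from the log:

	    # Query the same apiserver health endpoint the log checks.
	    curl -k https://192.168.61.130:8444/healthz
	    # List the kube-system pods that the system_pods wait inspects.
	    kubectl --context default-k8s-diff-port-887672 -n kube-system get pods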
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	5ef8615c6d829       523cad1a4df73       35 seconds ago      Exited              dashboard-metrics-scraper   9                   81faba96a2d37       dashboard-metrics-scraper-86c6bf9756-h7zkk
	81a749edca940       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   a4c287d6a7536       kubernetes-dashboard-7779f9b69b-nm2fk
	b2f1377c22829       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   141546d9f4781       storage-provisioner
	3591fabb069db       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   c6f50b02dc2d3       coredns-668d6bf9bc-9p64b
	ba64df030c05c       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   dd401412d664e       coredns-668d6bf9bc-46nfk
	57a670cb1ad4b       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   b4571798a47cd       kube-proxy-smp6l
	1e9b3e17732cd       019ee182b58e2       22 minutes ago      Running             kube-controller-manager     2                   9973dd13f63f8       kube-controller-manager-embed-certs-346100
	70743d4589c45       2b0d6572d062c       22 minutes ago      Running             kube-scheduler              2                   75a1ef1ea4197       kube-scheduler-embed-certs-346100
	d781ae221c1a4       95c0bda56fc4d       22 minutes ago      Running             kube-apiserver              2                   68d11de851c26       kube-apiserver-embed-certs-346100
	9c9f723685461       a9e7e6b294baf       22 minutes ago      Running             etcd                        2                   2a2c3efe33d40       etcd-embed-certs-346100
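
	A listing in this format can be gathered from inside the node with crictl; a sketch, assuming the embed-certs-346100 profile from this run:

	    # SSH into the profile's VM and list all containers, including exited ones.
	    minikube ssh -p embed-certs-346100 -- sudo crictl ps -a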
	
	
	==> containerd <==
	Jan 27 12:52:06 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:06.037625243Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:52:06 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:06.039560271Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:52:06 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:06.039636670Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 12:52:24 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:24.030726418Z" level=info msg="CreateContainer within sandbox \"81faba96a2d3773dbacddd14c25bc93a576e28209f469e5393d8c6eb74aed62f\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 12:52:24 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:24.054829003Z" level=info msg="CreateContainer within sandbox \"81faba96a2d3773dbacddd14c25bc93a576e28209f469e5393d8c6eb74aed62f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0\""
	Jan 27 12:52:24 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:24.055673414Z" level=info msg="StartContainer for \"d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0\""
	Jan 27 12:52:24 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:24.131100385Z" level=info msg="StartContainer for \"d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0\" returns successfully"
	Jan 27 12:52:24 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:24.176508453Z" level=info msg="shim disconnected" id=d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0 namespace=k8s.io
	Jan 27 12:52:24 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:24.176644123Z" level=warning msg="cleaning up after shim disconnected" id=d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0 namespace=k8s.io
	Jan 27 12:52:24 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:24.176718370Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:52:24 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:24.431430796Z" level=info msg="RemoveContainer for \"769f76a9a51aa2454566981d697d2d035a61f0d7d34cb6f342c5f37f6112c9bc\""
	Jan 27 12:52:24 embed-certs-346100 containerd[555]: time="2025-01-27T12:52:24.442910736Z" level=info msg="RemoveContainer for \"769f76a9a51aa2454566981d697d2d035a61f0d7d34cb6f342c5f37f6112c9bc\" returns successfully"
	Jan 27 12:57:21 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:21.028514482Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:57:21 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:21.036373670Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:57:21 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:21.038605259Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:57:21 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:21.038627428Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 12:57:25 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:25.032003801Z" level=info msg="CreateContainer within sandbox \"81faba96a2d3773dbacddd14c25bc93a576e28209f469e5393d8c6eb74aed62f\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 12:57:25 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:25.055840447Z" level=info msg="CreateContainer within sandbox \"81faba96a2d3773dbacddd14c25bc93a576e28209f469e5393d8c6eb74aed62f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"5ef8615c6d82944a53aa9ddb0b9d8100d221c24ee40fb9c997fc4049ce2185da\""
	Jan 27 12:57:25 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:25.056639645Z" level=info msg="StartContainer for \"5ef8615c6d82944a53aa9ddb0b9d8100d221c24ee40fb9c997fc4049ce2185da\""
	Jan 27 12:57:25 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:25.131646163Z" level=info msg="StartContainer for \"5ef8615c6d82944a53aa9ddb0b9d8100d221c24ee40fb9c997fc4049ce2185da\" returns successfully"
	Jan 27 12:57:25 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:25.169905597Z" level=info msg="shim disconnected" id=5ef8615c6d82944a53aa9ddb0b9d8100d221c24ee40fb9c997fc4049ce2185da namespace=k8s.io
	Jan 27 12:57:25 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:25.170023887Z" level=warning msg="cleaning up after shim disconnected" id=5ef8615c6d82944a53aa9ddb0b9d8100d221c24ee40fb9c997fc4049ce2185da namespace=k8s.io
	Jan 27 12:57:25 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:25.170038003Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:57:26 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:26.112319560Z" level=info msg="RemoveContainer for \"d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0\""
	Jan 27 12:57:26 embed-certs-346100 containerd[555]: time="2025-01-27T12:57:26.119452513Z" level=info msg="RemoveContainer for \"d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0\" returns successfully"
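
	The repeated PullImage failures above are DNS failures: the host fake.domain never resolves, so the metrics-server image cannot be fetched. The same error can be reproduced directly (a sketch, reusing the image reference from the log):

	    # Pulling by hand reproduces the "dial tcp: lookup fake.domain: no such host" error.
	    minikube ssh -p embed-certs-346100 -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4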
	
	
	==> coredns [3591fabb069db049fc3eb7ede58cf2382e5d69c378525ec0691de6145e329f1b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ba64df030c05c5e87f8173ec25d1e4ebe69a7f1d40618741fe21c29b38c8bb68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-346100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-346100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=embed-certs-346100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_35_59_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:35:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-346100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:57:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:56:46 +0000   Mon, 27 Jan 2025 12:35:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:56:46 +0000   Mon, 27 Jan 2025 12:35:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:56:46 +0000   Mon, 27 Jan 2025 12:35:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:56:46 +0000   Mon, 27 Jan 2025 12:35:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.206
	  Hostname:    embed-certs-346100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d17cad368854f87bb8a89328773f947
	  System UUID:                8d17cad3-6885-4f87-bb8a-89328773f947
	  Boot ID:                    a8e8cb87-4549-4bf1-bebb-d9113847c0ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-46nfk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-9p64b                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-346100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-346100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-346100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-smp6l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-346100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-w8fsn                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-h7zkk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-nm2fk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-346100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-346100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-346100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-346100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-346100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-346100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-346100 event: Registered Node embed-certs-346100 in Controller
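
	This is the usual kubectl describe node view; the notable points are the 950m of CPU requests against 2 allocatable CPUs and the metrics-server and dashboard-metrics-scraper pods that never reach Ready. A quick re-check (sketch, assuming the embed-certs-346100 context):

	    # Reproduce the node description and list pods whose phase is not Running.
	    kubectl --context embed-certs-346100 describe node embed-certs-346100
	    kubectl --context embed-certs-346100 get pods -A --field-selector=status.phase!=Running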
	
	
	==> dmesg <==
	[  +0.054404] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041576] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.884793] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.041421] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.549427] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.590574] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +0.084654] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076211] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.202895] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +0.126947] systemd-fstab-generator[516]: Ignoring "noauto" option for root device
	[  +0.298026] systemd-fstab-generator[547]: Ignoring "noauto" option for root device
	[  +1.826769] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +2.301091] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +0.254356] kauditd_printk_skb: 214 callbacks suppressed
	[  +5.002716] kauditd_printk_skb: 63 callbacks suppressed
	[  +7.163913] kauditd_printk_skb: 69 callbacks suppressed
	[Jan27 12:35] systemd-fstab-generator[3080]: Ignoring "noauto" option for root device
	[  +6.579877] systemd-fstab-generator[3445]: Ignoring "noauto" option for root device
	[  +0.076485] kauditd_printk_skb: 87 callbacks suppressed
	[Jan27 12:36] systemd-fstab-generator[3543]: Ignoring "noauto" option for root device
	[  +0.626489] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.460202] kauditd_printk_skb: 86 callbacks suppressed
	[  +6.293886] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [9c9f72368546188f274710d1ff28e6964f32c9d72d962c0082e71960f2fa2733] <==
	{"level":"info","ts":"2025-01-27T12:35:54.891022Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T12:35:54.891067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T12:35:54.891721Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:35:54.892697Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.206:2379"}
	{"level":"info","ts":"2025-01-27T12:35:54.906905Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dab912e2dad5f338","local-member-id":"70b03cc0c46c7b68","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:35:54.907186Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:35:54.907237Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:36:16.896084Z","caller":"traceutil/trace.go:171","msg":"trace[1640858678] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"332.921ms","start":"2025-01-27T12:36:16.560912Z","end":"2025-01-27T12:36:16.893833Z","steps":["trace[1640858678] 'process raft request'  (duration: 332.598082ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:36:16.905242Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.892385ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:36:16.905349Z","caller":"traceutil/trace.go:171","msg":"trace[1448880720] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:543; }","duration":"263.151722ms","start":"2025-01-27T12:36:16.642183Z","end":"2025-01-27T12:36:16.905334Z","steps":["trace[1448880720] 'agreement among raft nodes before linearized reading'  (duration: 257.855447ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:36:16.895284Z","caller":"traceutil/trace.go:171","msg":"trace[89196011] linearizableReadLoop","detail":"{readStateIndex:556; appliedIndex:555; }","duration":"251.431515ms","start":"2025-01-27T12:36:16.642220Z","end":"2025-01-27T12:36:16.893651Z","steps":["trace[89196011] 'read index received'  (duration: 251.269729ms)","trace[89196011] 'applied index is now lower than readState.Index'  (duration: 161.075µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:36:16.906039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.444198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:36:16.906074Z","caller":"traceutil/trace.go:171","msg":"trace[1652630605] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"221.509039ms","start":"2025-01-27T12:36:16.684555Z","end":"2025-01-27T12:36:16.906064Z","steps":["trace[1652630605] 'agreement among raft nodes before linearized reading'  (duration: 221.443305ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:36:16.906234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.331359ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:36:16.906261Z","caller":"traceutil/trace.go:171","msg":"trace[2100206820] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:543; }","duration":"159.361508ms","start":"2025-01-27T12:36:16.746892Z","end":"2025-01-27T12:36:16.906253Z","steps":["trace[2100206820] 'agreement among raft nodes before linearized reading'  (duration: 159.319205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:36:16.930739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T12:36:16.560895Z","time spent":"339.177323ms","remote":"127.0.0.1:40920","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:542 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-01-27T12:45:54.920628Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":872}
	{"level":"info","ts":"2025-01-27T12:45:54.955399Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":872,"took":"33.556033ms","hash":2656213590,"current-db-size-bytes":3039232,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":3039232,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-27T12:45:54.955689Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2656213590,"revision":872,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T12:50:54.928754Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1123}
	{"level":"info","ts":"2025-01-27T12:50:54.933605Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1123,"took":"3.914572ms","hash":1720518875,"current-db-size-bytes":3039232,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1826816,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:50:54.933709Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1720518875,"revision":1123,"compact-revision":872}
	{"level":"info","ts":"2025-01-27T12:55:54.935060Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1375}
	{"level":"info","ts":"2025-01-27T12:55:54.939873Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1375,"took":"3.837274ms","hash":1841846432,"current-db-size-bytes":3039232,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1835008,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:55:54.939973Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1841846432,"revision":1375,"compact-revision":1123}
	
	
	==> kernel <==
	 12:58:00 up 26 min,  0 users,  load average: 0.30, 0.31, 0.25
	Linux embed-certs-346100 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d781ae221c1a459f36348bcdb17dd760461946efd1b3bf351c9a6047f2342245] <==
	I0127 12:53:57.297468       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:53:57.297519       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:55:56.291162       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:55:56.291544       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:55:57.293166       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:55:57.293274       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:55:57.293530       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:55:57.293694       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 12:55:57.294548       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:55:57.295650       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:56:57.295141       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:56:57.295441       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:56:57.296262       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:56:57.296361       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 12:56:57.297548       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:56:57.297599       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
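
	The recurring 503s for v1beta1.metrics.k8s.io follow from the metrics-server pod never starting (see the image-pull failures earlier), so the aggregated APIService has no healthy backend. One way to confirm that linkage (sketch; the k8s-app=metrics-server label is assumed from the standard metrics-server manifests):

	    # The APIService should report Available=False while metrics-server is down.
	    kubectl --context embed-certs-346100 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-346100 -n kube-system get pods -l k8s-app=metrics-server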
	
	
	==> kube-controller-manager [1e9b3e17732cdbeafbc6d7e5403fed8765fa1ed8898515e00e9212864dd7da94] <==
	E0127 12:53:03.033036       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:53:03.068458       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:53:33.038869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:53:33.076003       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:54:03.045250       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:54:03.083418       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:54:33.054241       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:54:33.091331       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:55:03.061274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:03.098990       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:55:33.067097       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:33.108427       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:03.073584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:03.115772       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:33.080026       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:33.121921       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:56:46.063545       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-346100"
	E0127 12:57:03.087282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:57:03.128774       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:57:26.129108       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="420.151µs"
	I0127 12:57:28.130780       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="51.126µs"
	E0127 12:57:33.093525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:57:33.136112       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:57:34.043803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="193.716µs"
	I0127 12:57:49.047416       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="142.831µs"
	
	
	==> kube-proxy [57a670cb1ad4b88a67fa20cf60cd05fbcdcbb83eebd605a4e76c0e99ca97fcd7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:36:05.519457       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:36:05.605142       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.206"]
	E0127 12:36:05.605212       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:05.724017       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:05.724051       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:05.724071       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:05.729056       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:05.729467       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:05.729752       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:05.733593       1 config.go:199] "Starting service config controller"
	I0127 12:36:05.733855       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:05.734039       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:05.734403       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:05.737216       1 config.go:329] "Starting node config controller"
	I0127 12:36:05.738174       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:05.834651       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:05.834712       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:05.839361       1 shared_informer.go:320] Caches are synced for node config
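
	The nftables errors at the top of this section come from the cleanup pass only; kube-proxy then falls back to the iptables proxier ("Using iptables Proxier"), so they do not indicate a dataplane failure. To re-check the active mode (sketch):

	    # Grep the proxy mode from the running kube-proxy pod's log.
	    kubectl --context embed-certs-346100 -n kube-system logs kube-proxy-smp6l | grep -i proxier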
	
	
	==> kube-scheduler [70743d4589c458713af43fcd00b6098f1397d79c45676ccaeb5bfc0d2204afee] <==
	W0127 12:35:57.259455       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:35:57.259663       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.302587       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:35:57.302649       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.302715       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 12:35:57.302765       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.367108       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 12:35:57.367164       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.373822       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:35:57.374014       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.404668       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 12:35:57.404905       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.451388       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 12:35:57.451446       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.541435       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 12:35:57.541485       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.545743       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:35:57.545783       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.570764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:35:57.571368       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.584544       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 12:35:57.584858       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:35:57.737519       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:35:57.737575       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:00.547039       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
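
	The forbidden list/watch errors here are the usual startup window before the scheduler's RBAC-backed informers are authorized; the final "Caches are synced" line shows they resolved. Recent scheduler output can be re-checked with (sketch):

	    # Tail the scheduler's log to confirm no further authorization errors after startup.
	    kubectl --context embed-certs-346100 -n kube-system logs kube-scheduler-embed-certs-346100 --tail=20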
	
	
	==> kubelet <==
	Jan 27 12:57:00 embed-certs-346100 kubelet[3452]: E0127 12:57:00.028351    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-h7zkk_kubernetes-dashboard(dd2d183f-f004-437e-88d5-aa601fcd656e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-h7zkk" podUID="dd2d183f-f004-437e-88d5-aa601fcd656e"
	Jan 27 12:57:06 embed-certs-346100 kubelet[3452]: E0127 12:57:06.028346    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-w8fsn" podUID="3a78ab43-37b0-4dc0-89a9-59a558ef997c"
	Jan 27 12:57:13 embed-certs-346100 kubelet[3452]: I0127 12:57:13.030766    3452 scope.go:117] "RemoveContainer" containerID="d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0"
	Jan 27 12:57:13 embed-certs-346100 kubelet[3452]: E0127 12:57:13.031458    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-h7zkk_kubernetes-dashboard(dd2d183f-f004-437e-88d5-aa601fcd656e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-h7zkk" podUID="dd2d183f-f004-437e-88d5-aa601fcd656e"
	Jan 27 12:57:21 embed-certs-346100 kubelet[3452]: E0127 12:57:21.039105    3452 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:57:21 embed-certs-346100 kubelet[3452]: E0127 12:57:21.039504    3452 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:57:21 embed-certs-346100 kubelet[3452]: E0127 12:57:21.039865    3452 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb8nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-w8fsn_kube-system(3a78ab43-37b0-4dc0-89a9-59a558ef997c): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 12:57:21 embed-certs-346100 kubelet[3452]: E0127 12:57:21.041268    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-w8fsn" podUID="3a78ab43-37b0-4dc0-89a9-59a558ef997c"
	Jan 27 12:57:25 embed-certs-346100 kubelet[3452]: I0127 12:57:25.028660    3452 scope.go:117] "RemoveContainer" containerID="d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0"
	Jan 27 12:57:26 embed-certs-346100 kubelet[3452]: I0127 12:57:26.110034    3452 scope.go:117] "RemoveContainer" containerID="d3db57958f8dabaa86b6711f7b77508b30d37fb1b2d9fc6c16eb997a97f1c2f0"
	Jan 27 12:57:26 embed-certs-346100 kubelet[3452]: I0127 12:57:26.110686    3452 scope.go:117] "RemoveContainer" containerID="5ef8615c6d82944a53aa9ddb0b9d8100d221c24ee40fb9c997fc4049ce2185da"
	Jan 27 12:57:26 embed-certs-346100 kubelet[3452]: E0127 12:57:26.110888    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-h7zkk_kubernetes-dashboard(dd2d183f-f004-437e-88d5-aa601fcd656e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-h7zkk" podUID="dd2d183f-f004-437e-88d5-aa601fcd656e"
	Jan 27 12:57:28 embed-certs-346100 kubelet[3452]: I0127 12:57:28.116298    3452 scope.go:117] "RemoveContainer" containerID="5ef8615c6d82944a53aa9ddb0b9d8100d221c24ee40fb9c997fc4049ce2185da"
	Jan 27 12:57:28 embed-certs-346100 kubelet[3452]: E0127 12:57:28.116455    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-h7zkk_kubernetes-dashboard(dd2d183f-f004-437e-88d5-aa601fcd656e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-h7zkk" podUID="dd2d183f-f004-437e-88d5-aa601fcd656e"
	Jan 27 12:57:34 embed-certs-346100 kubelet[3452]: E0127 12:57:34.028664    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-w8fsn" podUID="3a78ab43-37b0-4dc0-89a9-59a558ef997c"
	Jan 27 12:57:43 embed-certs-346100 kubelet[3452]: I0127 12:57:43.029379    3452 scope.go:117] "RemoveContainer" containerID="5ef8615c6d82944a53aa9ddb0b9d8100d221c24ee40fb9c997fc4049ce2185da"
	Jan 27 12:57:43 embed-certs-346100 kubelet[3452]: E0127 12:57:43.030350    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-h7zkk_kubernetes-dashboard(dd2d183f-f004-437e-88d5-aa601fcd656e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-h7zkk" podUID="dd2d183f-f004-437e-88d5-aa601fcd656e"
	Jan 27 12:57:49 embed-certs-346100 kubelet[3452]: E0127 12:57:49.031067    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-w8fsn" podUID="3a78ab43-37b0-4dc0-89a9-59a558ef997c"
	Jan 27 12:57:55 embed-certs-346100 kubelet[3452]: I0127 12:57:55.027642    3452 scope.go:117] "RemoveContainer" containerID="5ef8615c6d82944a53aa9ddb0b9d8100d221c24ee40fb9c997fc4049ce2185da"
	Jan 27 12:57:55 embed-certs-346100 kubelet[3452]: E0127 12:57:55.027835    3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-h7zkk_kubernetes-dashboard(dd2d183f-f004-437e-88d5-aa601fcd656e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-h7zkk" podUID="dd2d183f-f004-437e-88d5-aa601fcd656e"
	Jan 27 12:57:59 embed-certs-346100 kubelet[3452]: E0127 12:57:59.051738    3452 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:57:59 embed-certs-346100 kubelet[3452]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:57:59 embed-certs-346100 kubelet[3452]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:57:59 embed-certs-346100 kubelet[3452]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:57:59 embed-certs-346100 kubelet[3452]:  > table="nat" chain="KUBE-KUBELET-CANARY"
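
	The kubelet entries reduce to the same two persistent failures: dashboard-metrics-scraper in CrashLoopBackOff and metrics-server in ImagePullBackOff against the unresolvable fake.domain registry. Their event history can be pulled with kubectl (sketch, using the pod names from the log):

	    # Inspect the crash-looping scraper and the image-pull-blocked metrics-server pod.
	    kubectl --context embed-certs-346100 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-86c6bf9756-h7zkk
	    kubectl --context embed-certs-346100 -n kube-system describe pod metrics-server-f79f97bbb-w8fsn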
	
	
	==> kubernetes-dashboard [81a749edca940584da30bdcb3d8ad981f4e14ce73a329101c3dfce294564faac] <==
	2025/01/27 12:45:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:46:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:46:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:47:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:47:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:57:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:57:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b2f1377c22829c4ec19a7ee29fc5f18c306ad422928dab9da5f018914af5a1d0] <==
	I0127 12:36:06.251400       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 12:36:06.324845       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 12:36:06.326005       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 12:36:06.398878       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 12:36:06.399577       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb98b945-5022-4d92-8a91-dc9960b974c2", APIVersion:"v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-346100_19d86aed-271b-46d3-85b6-fa861b4140d8 became leader
	I0127 12:36:06.400043       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-346100_19d86aed-271b-46d3-85b6-fa861b4140d8!
	I0127 12:36:06.501057       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-346100_19d86aed-271b-46d3-85b6-fa861b4140d8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-346100 -n embed-certs-346100
E0127 12:58:01.255822  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-346100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-w8fsn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-346100 describe pod metrics-server-f79f97bbb-w8fsn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-346100 describe pod metrics-server-f79f97bbb-w8fsn: exit status 1 (65.05403ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-w8fsn" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-346100 describe pod metrics-server-f79f97bbb-w8fsn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1619.09s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1629.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-887672 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 12:31:23.465031  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:25.015731  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:30.137552  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:31.913405  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:31.919833  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:31.931208  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:31.952595  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:31.994052  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:32.075608  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:32.236921  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:32.558681  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:33.200415  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:34.481996  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:37.043512  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:40.379364  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:42.165333  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-887672 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (27m7.937246867s)

-- stdout --
	* [default-k8s-diff-port-887672] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-887672" primary control-plane node in "default-k8s-diff-port-887672" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-887672" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-887672 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0127 12:31:22.965865  532844 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:31:22.966098  532844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:22.966107  532844 out.go:358] Setting ErrFile to fd 2...
	I0127 12:31:22.966117  532844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:22.966275  532844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 12:31:22.966801  532844 out.go:352] Setting JSON to false
	I0127 12:31:22.967702  532844 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11626,"bootTime":1737969457,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:31:22.967801  532844 start.go:139] virtualization: kvm guest
	I0127 12:31:22.970091  532844 out.go:177] * [default-k8s-diff-port-887672] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:31:22.971383  532844 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:31:22.971399  532844 notify.go:220] Checking for updates...
	I0127 12:31:22.973645  532844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:31:22.974854  532844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:31:22.976088  532844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 12:31:22.977246  532844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:31:22.978429  532844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:31:22.979827  532844 config.go:182] Loaded profile config "default-k8s-diff-port-887672": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:31:22.980182  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:31:22.980251  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:22.997032  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0127 12:31:22.997528  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:22.998179  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:31:22.998201  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:22.998638  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:22.998855  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:31:22.999192  532844 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:31:22.999617  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:31:22.999668  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:23.014124  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0127 12:31:23.014527  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:23.015099  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:31:23.015141  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:23.015566  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:23.015768  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:31:23.055465  532844 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:31:23.056658  532844 start.go:297] selected driver: kvm2
	I0127 12:31:23.056686  532844 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-887672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8
s-diff-port-887672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:23.056877  532844 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:31:23.057640  532844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:23.057724  532844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:31:23.077225  532844 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:31:23.077798  532844 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:31:23.077852  532844 cni.go:84] Creating CNI manager for ""
	I0127 12:31:23.077918  532844 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:31:23.077974  532844 start.go:340] cluster config:
	{Name:default-k8s-diff-port-887672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-887672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home
/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:23.078159  532844 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:31:23.079876  532844 out.go:177] * Starting "default-k8s-diff-port-887672" primary control-plane node in "default-k8s-diff-port-887672" cluster
	I0127 12:31:23.081008  532844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:31:23.081051  532844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 12:31:23.081061  532844 cache.go:56] Caching tarball of preloaded images
	I0127 12:31:23.081193  532844 preload.go:172] Found /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:31:23.081212  532844 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 12:31:23.081345  532844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672/config.json ...
	I0127 12:31:23.081593  532844 start.go:360] acquireMachinesLock for default-k8s-diff-port-887672: {Name:mk818835aef0de701295cc2c98fea95e1be33202 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:31:23.081679  532844 start.go:364] duration metric: took 56.485µs to acquireMachinesLock for "default-k8s-diff-port-887672"
	I0127 12:31:23.081706  532844 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:31:23.081712  532844 fix.go:54] fixHost starting: 
	I0127 12:31:23.082100  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:31:23.082165  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:23.097298  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0127 12:31:23.097667  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:23.098138  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:31:23.098167  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:23.098441  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:23.098650  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:31:23.098844  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:31:23.100627  532844 fix.go:112] recreateIfNeeded on default-k8s-diff-port-887672: state=Stopped err=<nil>
	I0127 12:31:23.100660  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	W0127 12:31:23.100846  532844 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:31:23.102595  532844 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-887672" ...
	I0127 12:31:23.103670  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Start
	I0127 12:31:23.103885  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) starting domain...
	I0127 12:31:23.103910  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) ensuring networks are active...
	I0127 12:31:23.104693  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Ensuring network default is active
	I0127 12:31:23.105263  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Ensuring network mk-default-k8s-diff-port-887672 is active
	I0127 12:31:23.105670  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) getting domain XML...
	I0127 12:31:23.106397  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) creating domain...
	I0127 12:31:24.475524  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) waiting for IP...
	I0127 12:31:24.476376  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:24.476853  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:24.476948  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:24.476848  532879 retry.go:31] will retry after 306.374593ms: waiting for domain to come up
	I0127 12:31:24.785496  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:24.786150  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:24.786187  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:24.786101  532879 retry.go:31] will retry after 387.358698ms: waiting for domain to come up
	I0127 12:31:25.174527  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:25.175049  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:25.175082  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:25.174989  532879 retry.go:31] will retry after 368.03552ms: waiting for domain to come up
	I0127 12:31:25.544667  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:25.545298  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:25.545334  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:25.545238  532879 retry.go:31] will retry after 578.544586ms: waiting for domain to come up
	I0127 12:31:26.125317  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:26.125784  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:26.125832  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:26.125750  532879 retry.go:31] will retry after 737.641255ms: waiting for domain to come up
	I0127 12:31:26.865025  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:26.865479  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:26.865506  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:26.865443  532879 retry.go:31] will retry after 757.856829ms: waiting for domain to come up
	I0127 12:31:27.625429  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:27.625995  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:27.626032  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:27.625960  532879 retry.go:31] will retry after 948.951574ms: waiting for domain to come up
	I0127 12:31:28.576628  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:28.577181  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:28.577231  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:28.577167  532879 retry.go:31] will retry after 1.231361156s: waiting for domain to come up
	I0127 12:31:29.810275  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:29.810759  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:29.810791  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:29.810721  532879 retry.go:31] will retry after 1.785567237s: waiting for domain to come up
	I0127 12:31:31.598118  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:31.598774  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:31.598807  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:31.598724  532879 retry.go:31] will retry after 1.847058882s: waiting for domain to come up
	I0127 12:31:33.447412  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:33.447933  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:33.447998  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:33.447910  532879 retry.go:31] will retry after 2.675794477s: waiting for domain to come up
	I0127 12:31:36.124835  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:36.125296  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:36.125352  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:36.125258  532879 retry.go:31] will retry after 3.081891048s: waiting for domain to come up
	I0127 12:31:39.208693  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:39.209276  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | unable to find current IP address of domain default-k8s-diff-port-887672 in network mk-default-k8s-diff-port-887672
	I0127 12:31:39.209301  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | I0127 12:31:39.209226  532879 retry.go:31] will retry after 3.705071961s: waiting for domain to come up
	I0127 12:31:42.919244  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:42.919874  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) found domain IP: 192.168.61.130
	I0127 12:31:42.919897  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) reserving static IP address...
	I0127 12:31:42.919913  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has current primary IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:42.920267  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-887672", mac: "52:54:00:65:54:e1", ip: "192.168.61.130"} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:42.920291  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) reserved static IP address 192.168.61.130 for domain default-k8s-diff-port-887672
	I0127 12:31:42.920304  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | skip adding static IP to network mk-default-k8s-diff-port-887672 - found existing host DHCP lease matching {name: "default-k8s-diff-port-887672", mac: "52:54:00:65:54:e1", ip: "192.168.61.130"}
	I0127 12:31:42.920319  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Getting to WaitForSSH function...
	I0127 12:31:42.920333  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) waiting for SSH...
	I0127 12:31:42.922454  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:42.922786  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:42.922818  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:42.922943  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Using SSH client type: external
	I0127 12:31:42.922976  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa (-rw-------)
	I0127 12:31:42.923003  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:31:42.923021  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | About to run SSH command:
	I0127 12:31:42.923031  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | exit 0
	I0127 12:31:43.052257  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | SSH cmd err, output: <nil>: 
	I0127 12:31:43.052639  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetConfigRaw
	I0127 12:31:43.053332  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetIP
	I0127 12:31:43.055688  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.055979  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.056013  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.056265  532844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672/config.json ...
	I0127 12:31:43.056472  532844 machine.go:93] provisionDockerMachine start ...
	I0127 12:31:43.056494  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:31:43.056728  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:31:43.058950  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.059281  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.059308  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.059442  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:31:43.059625  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.059795  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.059918  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:31:43.060095  532844 main.go:141] libmachine: Using SSH client type: native
	I0127 12:31:43.060281  532844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I0127 12:31:43.060293  532844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:31:43.168816  532844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:31:43.168857  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetMachineName
	I0127 12:31:43.169124  532844 buildroot.go:166] provisioning hostname "default-k8s-diff-port-887672"
	I0127 12:31:43.169153  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetMachineName
	I0127 12:31:43.169353  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:31:43.172459  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.172874  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.172904  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.173094  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:31:43.173309  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.173514  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.173660  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:31:43.173834  532844 main.go:141] libmachine: Using SSH client type: native
	I0127 12:31:43.174020  532844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I0127 12:31:43.174037  532844 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-887672 && echo "default-k8s-diff-port-887672" | sudo tee /etc/hostname
	I0127 12:31:43.296725  532844 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-887672
	
	I0127 12:31:43.296783  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:31:43.299519  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.299989  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.300022  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.300238  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:31:43.300426  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.300590  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.300714  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:31:43.300924  532844 main.go:141] libmachine: Using SSH client type: native
	I0127 12:31:43.301097  532844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I0127 12:31:43.301119  532844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-887672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-887672/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-887672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:31:43.421854  532844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:31:43.421891  532844 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-471120/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-471120/.minikube}
	I0127 12:31:43.421944  532844 buildroot.go:174] setting up certificates
	I0127 12:31:43.421963  532844 provision.go:84] configureAuth start
	I0127 12:31:43.421984  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetMachineName
	I0127 12:31:43.422363  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetIP
	I0127 12:31:43.425072  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.425482  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.425548  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.425647  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:31:43.427734  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.428105  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.428141  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.428250  532844 provision.go:143] copyHostCerts
	I0127 12:31:43.428299  532844 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem, removing ...
	I0127 12:31:43.428308  532844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem
	I0127 12:31:43.428360  532844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem (1082 bytes)
	I0127 12:31:43.428450  532844 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem, removing ...
	I0127 12:31:43.428458  532844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem
	I0127 12:31:43.428477  532844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem (1123 bytes)
	I0127 12:31:43.428530  532844 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem, removing ...
	I0127 12:31:43.428537  532844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem
	I0127 12:31:43.428554  532844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem (1679 bytes)
	I0127 12:31:43.428599  532844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-887672 san=[127.0.0.1 192.168.61.130 default-k8s-diff-port-887672 localhost minikube]
	I0127 12:31:43.529825  532844 provision.go:177] copyRemoteCerts
	I0127 12:31:43.529884  532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:31:43.529911  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:31:43.532485  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.532798  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.532826  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.532970  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:31:43.533171  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.533328  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:31:43.533437  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:31:43.617662  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:31:43.640266  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 12:31:43.662257  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:31:43.684138  532844 provision.go:87] duration metric: took 262.160869ms to configureAuth
	I0127 12:31:43.684164  532844 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:31:43.684338  532844 config.go:182] Loaded profile config "default-k8s-diff-port-887672": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:31:43.684350  532844 machine.go:96] duration metric: took 627.865241ms to provisionDockerMachine
	I0127 12:31:43.684363  532844 start.go:293] postStartSetup for "default-k8s-diff-port-887672" (driver="kvm2")
	I0127 12:31:43.684377  532844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:31:43.684414  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:31:43.684713  532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:31:43.684755  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:31:43.687551  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.687893  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.687922  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.688110  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:31:43.688310  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.688482  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:31:43.688660  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:31:43.770980  532844 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:31:43.774657  532844 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:31:43.774683  532844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/addons for local assets ...
	I0127 12:31:43.774759  532844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/files for local assets ...
	I0127 12:31:43.774865  532844 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem -> 4783872.pem in /etc/ssl/certs
	I0127 12:31:43.775091  532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:31:43.785047  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:31:43.806504  532844 start.go:296] duration metric: took 122.126446ms for postStartSetup
	I0127 12:31:43.806543  532844 fix.go:56] duration metric: took 20.724831721s for fixHost
	I0127 12:31:43.806566  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:31:43.809446  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.809900  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.809932  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.810174  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:31:43.810420  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.810594  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.810735  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:31:43.810897  532844 main.go:141] libmachine: Using SSH client type: native
	I0127 12:31:43.811067  532844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I0127 12:31:43.811077  532844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:31:43.921034  532844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981103.898159482
	
	I0127 12:31:43.921055  532844 fix.go:216] guest clock: 1737981103.898159482
	I0127 12:31:43.921061  532844 fix.go:229] Guest: 2025-01-27 12:31:43.898159482 +0000 UTC Remote: 2025-01-27 12:31:43.806548193 +0000 UTC m=+20.883781011 (delta=91.611289ms)
	I0127 12:31:43.921079  532844 fix.go:200] guest clock delta is within tolerance: 91.611289ms
	I0127 12:31:43.921086  532844 start.go:83] releasing machines lock for "default-k8s-diff-port-887672", held for 20.839390345s
	I0127 12:31:43.921118  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:31:43.921455  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetIP
	I0127 12:31:43.924056  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.924479  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.924509  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.924625  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:31:43.925119  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:31:43.925295  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:31:43.925397  532844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:31:43.925448  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:31:43.925551  532844 ssh_runner.go:195] Run: cat /version.json
	I0127 12:31:43.925578  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:31:43.928112  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.928443  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.928472  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.928517  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.928566  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:31:43.928750  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.928925  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:43.928939  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:31:43.928954  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:43.929096  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:31:43.929116  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:31:43.929265  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:31:43.929418  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:31:43.929611  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:31:44.013288  532844 ssh_runner.go:195] Run: systemctl --version
	I0127 12:31:44.048648  532844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:31:44.054304  532844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:31:44.054387  532844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:31:44.069318  532844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:31:44.069352  532844 start.go:495] detecting cgroup driver to use...
	I0127 12:31:44.069414  532844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:31:44.104814  532844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:31:44.116832  532844 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:31:44.116879  532844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:31:44.129489  532844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:31:44.141776  532844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:31:44.258506  532844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:31:44.391928  532844 docker.go:233] disabling docker service ...
	I0127 12:31:44.392002  532844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:31:44.405395  532844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:31:44.416950  532844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:31:44.548440  532844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:31:44.674462  532844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:31:44.693644  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:31:44.711654  532844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:31:44.721823  532844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:31:44.731703  532844 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:31:44.731768  532844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:31:44.741384  532844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:31:44.751679  532844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:31:44.760956  532844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:31:44.770943  532844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:31:44.781175  532844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:31:44.793231  532844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:31:44.804954  532844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:31:44.816844  532844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:31:44.826333  532844 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:31:44.826393  532844 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:31:44.839816  532844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:31:44.848811  532844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:31:44.998397  532844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:31:45.026192  532844 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 12:31:45.026263  532844 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:31:45.030512  532844 retry.go:31] will retry after 669.910775ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 12:31:45.701441  532844 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
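
The retried stat above is a bounded wait for the containerd socket to come back after the restart. A minimal Go sketch of that pattern, assuming only the socket path and the 60-second budget shown in the log (the helper name and polling interval are illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the timeout
// elapses. This mirrors the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}
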
	I0127 12:31:45.706668  532844 start.go:563] Will wait 60s for crictl version
	I0127 12:31:45.706718  532844 ssh_runner.go:195] Run: which crictl
	I0127 12:31:45.710232  532844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:31:45.752289  532844 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 12:31:45.752353  532844 ssh_runner.go:195] Run: containerd --version
	I0127 12:31:45.775943  532844 ssh_runner.go:195] Run: containerd --version
	I0127 12:31:45.801413  532844 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 12:31:45.802652  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetIP
	I0127 12:31:45.805491  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:45.805882  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:31:45.805920  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:31:45.806099  532844 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 12:31:45.809694  532844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:31:45.821653  532844 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-887672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-887672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:31:45.821759  532844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:31:45.821801  532844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:31:45.858623  532844 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:31:45.858648  532844 containerd.go:534] Images already preloaded, skipping extraction
	I0127 12:31:45.858706  532844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:31:45.892128  532844 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:31:45.892152  532844 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:31:45.892161  532844 kubeadm.go:934] updating node { 192.168.61.130 8444 v1.32.1 containerd true true} ...
	I0127 12:31:45.892278  532844 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-887672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-887672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:31:45.892346  532844 ssh_runner.go:195] Run: sudo crictl info
	I0127 12:31:45.923048  532844 cni.go:84] Creating CNI manager for ""
	I0127 12:31:45.923069  532844 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:31:45.923078  532844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:31:45.923098  532844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.130 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-887672 NodeName:default-k8s-diff-port-887672 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:31:45.923198  532844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.130
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-887672"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:31:45.923256  532844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:31:45.932670  532844 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:31:45.932756  532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:31:45.944245  532844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I0127 12:31:45.959362  532844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:31:45.974406  532844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2324 bytes)
	I0127 12:31:45.989754  532844 ssh_runner.go:195] Run: grep 192.168.61.130	control-plane.minikube.internal$ /etc/hosts
	I0127 12:31:45.993272  532844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:31:46.005012  532844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:31:46.127670  532844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:31:46.145189  532844 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672 for IP: 192.168.61.130
	I0127 12:31:46.145215  532844 certs.go:194] generating shared ca certs ...
	I0127 12:31:46.145238  532844 certs.go:226] acquiring lock for ca certs: {Name:mk02d117412837bd489768267e2b174e6c3ff6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:31:46.145440  532844 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key
	I0127 12:31:46.145516  532844 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key
	I0127 12:31:46.145535  532844 certs.go:256] generating profile certs ...
	I0127 12:31:46.145643  532844 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672/client.key
	I0127 12:31:46.145719  532844 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672/apiserver.key.730023e5
	I0127 12:31:46.145779  532844 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672/proxy-client.key
	I0127 12:31:46.145936  532844 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem (1338 bytes)
	W0127 12:31:46.145983  532844 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387_empty.pem, impossibly tiny 0 bytes
	I0127 12:31:46.145998  532844 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:31:46.146022  532844 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:31:46.146049  532844 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:31:46.146076  532844 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem (1679 bytes)
	I0127 12:31:46.146127  532844 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:31:46.146813  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:31:46.183312  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:31:46.208902  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:31:46.236908  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:31:46.261218  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 12:31:46.298824  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:31:46.331977  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:31:46.360220  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/default-k8s-diff-port-887672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:31:46.383007  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /usr/share/ca-certificates/4783872.pem (1708 bytes)
	I0127 12:31:46.404689  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:31:46.427238  532844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem --> /usr/share/ca-certificates/478387.pem (1338 bytes)
	I0127 12:31:46.449530  532844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:31:46.465042  532844 ssh_runner.go:195] Run: openssl version
	I0127 12:31:46.470361  532844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:31:46.479673  532844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:31:46.483570  532844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:31:46.483617  532844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:31:46.489187  532844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:31:46.498665  532844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478387.pem && ln -fs /usr/share/ca-certificates/478387.pem /etc/ssl/certs/478387.pem"
	I0127 12:31:46.507811  532844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478387.pem
	I0127 12:31:46.511657  532844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:28 /usr/share/ca-certificates/478387.pem
	I0127 12:31:46.511701  532844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478387.pem
	I0127 12:31:46.516702  532844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478387.pem /etc/ssl/certs/51391683.0"
	I0127 12:31:46.525952  532844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783872.pem && ln -fs /usr/share/ca-certificates/4783872.pem /etc/ssl/certs/4783872.pem"
	I0127 12:31:46.535740  532844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783872.pem
	I0127 12:31:46.539871  532844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:28 /usr/share/ca-certificates/4783872.pem
	I0127 12:31:46.539916  532844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783872.pem
	I0127 12:31:46.544966  532844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783872.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:31:46.554451  532844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:31:46.558402  532844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:31:46.563595  532844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:31:46.568813  532844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:31:46.574090  532844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:31:46.579327  532844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:31:46.584508  532844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
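
The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate is still valid for at least the next 24 hours. A minimal Go sketch of the same check, assuming a PEM-encoded certificate path (the path below is taken from the log purely for illustration):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the certificate at path remains valid for at
// least the duration d, the same question `openssl x509 -checkend` answers.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for the next 24h:", ok)
}
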
	I0127 12:31:46.589640  532844 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-887672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-887672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:46.589760  532844 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 12:31:46.589803  532844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:31:46.625271  532844 cri.go:89] found id: "bec0a243ae85c794bd4eb4ee98289558b20096ce03e8418bee4f3841bd554b84"
	I0127 12:31:46.625292  532844 cri.go:89] found id: "bc41a8253d28569a2c99c9cfe8531b3dfb886786eaa4bb1ee32b54db6c1bc168"
	I0127 12:31:46.625297  532844 cri.go:89] found id: "5f7d2739d74a9f904ae4433a322ff1f9150ef601008a882d23195cae93ac40b5"
	I0127 12:31:46.625301  532844 cri.go:89] found id: "569d504a1c71a4936c86aa33f0c5d204fd435bdde3ddcde893c8be0913181a2c"
	I0127 12:31:46.625304  532844 cri.go:89] found id: "1124c7c996c6742c150ce2e79805b6757206127ff8eda710bca759825daefe4e"
	I0127 12:31:46.625307  532844 cri.go:89] found id: "4b34c566a4e6125c1d0b91b10108f9d6638424c0481c00b2c4a3621da478a1fc"
	I0127 12:31:46.625309  532844 cri.go:89] found id: "d6c02340896cca4e04b4b8bc5190e8f549aa295f1a6fbff82aacf5611b0407d8"
	I0127 12:31:46.625312  532844 cri.go:89] found id: "772e000c7f8d2665318ca8ac31f4319eb297a071148178fcee982d55318824c3"
	I0127 12:31:46.625314  532844 cri.go:89] found id: ""
	I0127 12:31:46.625352  532844 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 12:31:46.637788  532844 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T12:31:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 12:31:46.637841  532844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:31:46.646258  532844 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:31:46.646273  532844 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:31:46.646306  532844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:31:46.654358  532844 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:31:46.655045  532844 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-887672" does not appear in /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:31:46.655411  532844 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-471120/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-887672" cluster setting kubeconfig missing "default-k8s-diff-port-887672" context setting]
	I0127 12:31:46.656020  532844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:31:46.657422  532844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:31:46.665550  532844 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.130
	I0127 12:31:46.665578  532844 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:31:46.665589  532844 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 12:31:46.665633  532844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:31:46.701806  532844 cri.go:89] found id: "bec0a243ae85c794bd4eb4ee98289558b20096ce03e8418bee4f3841bd554b84"
	I0127 12:31:46.701831  532844 cri.go:89] found id: "bc41a8253d28569a2c99c9cfe8531b3dfb886786eaa4bb1ee32b54db6c1bc168"
	I0127 12:31:46.701847  532844 cri.go:89] found id: "5f7d2739d74a9f904ae4433a322ff1f9150ef601008a882d23195cae93ac40b5"
	I0127 12:31:46.701851  532844 cri.go:89] found id: "569d504a1c71a4936c86aa33f0c5d204fd435bdde3ddcde893c8be0913181a2c"
	I0127 12:31:46.701856  532844 cri.go:89] found id: "1124c7c996c6742c150ce2e79805b6757206127ff8eda710bca759825daefe4e"
	I0127 12:31:46.701861  532844 cri.go:89] found id: "4b34c566a4e6125c1d0b91b10108f9d6638424c0481c00b2c4a3621da478a1fc"
	I0127 12:31:46.701865  532844 cri.go:89] found id: "d6c02340896cca4e04b4b8bc5190e8f549aa295f1a6fbff82aacf5611b0407d8"
	I0127 12:31:46.701869  532844 cri.go:89] found id: "772e000c7f8d2665318ca8ac31f4319eb297a071148178fcee982d55318824c3"
	I0127 12:31:46.701873  532844 cri.go:89] found id: ""
	I0127 12:31:46.701879  532844 cri.go:252] Stopping containers: [bec0a243ae85c794bd4eb4ee98289558b20096ce03e8418bee4f3841bd554b84 bc41a8253d28569a2c99c9cfe8531b3dfb886786eaa4bb1ee32b54db6c1bc168 5f7d2739d74a9f904ae4433a322ff1f9150ef601008a882d23195cae93ac40b5 569d504a1c71a4936c86aa33f0c5d204fd435bdde3ddcde893c8be0913181a2c 1124c7c996c6742c150ce2e79805b6757206127ff8eda710bca759825daefe4e 4b34c566a4e6125c1d0b91b10108f9d6638424c0481c00b2c4a3621da478a1fc d6c02340896cca4e04b4b8bc5190e8f549aa295f1a6fbff82aacf5611b0407d8 772e000c7f8d2665318ca8ac31f4319eb297a071148178fcee982d55318824c3]
	I0127 12:31:46.701935  532844 ssh_runner.go:195] Run: which crictl
	I0127 12:31:46.705600  532844 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 bec0a243ae85c794bd4eb4ee98289558b20096ce03e8418bee4f3841bd554b84 bc41a8253d28569a2c99c9cfe8531b3dfb886786eaa4bb1ee32b54db6c1bc168 5f7d2739d74a9f904ae4433a322ff1f9150ef601008a882d23195cae93ac40b5 569d504a1c71a4936c86aa33f0c5d204fd435bdde3ddcde893c8be0913181a2c 1124c7c996c6742c150ce2e79805b6757206127ff8eda710bca759825daefe4e 4b34c566a4e6125c1d0b91b10108f9d6638424c0481c00b2c4a3621da478a1fc d6c02340896cca4e04b4b8bc5190e8f549aa295f1a6fbff82aacf5611b0407d8 772e000c7f8d2665318ca8ac31f4319eb297a071148178fcee982d55318824c3
	I0127 12:31:46.745931  532844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:31:46.761977  532844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:31:46.773035  532844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:31:46.773049  532844 kubeadm.go:157] found existing configuration files:
	
	I0127 12:31:46.773083  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 12:31:46.781371  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:31:46.781414  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:31:46.791645  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 12:31:46.800793  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:31:46.800839  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:31:46.810291  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 12:31:46.819207  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:31:46.819242  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:31:46.830107  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 12:31:46.838584  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:31:46.838628  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:31:46.847414  532844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:31:46.857185  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:46.969327  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:47.750467  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:47.994463  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:48.077044  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:48.182033  532844 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:31:48.182149  532844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:48.683024  532844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:49.183152  532844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:49.223081  532844 api_server.go:72] duration metric: took 1.041043996s to wait for apiserver process to appear ...
	I0127 12:31:49.223114  532844 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:31:49.223140  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:31:49.223744  532844 api_server.go:269] stopped: https://192.168.61.130:8444/healthz: Get "https://192.168.61.130:8444/healthz": dial tcp 192.168.61.130:8444: connect: connection refused
	I0127 12:31:49.724147  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:31:52.285626  532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:31:52.285660  532844 api_server.go:103] status: https://192.168.61.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:31:52.285678  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:31:52.310366  532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:31:52.310395  532844 api_server.go:103] status: https://192.168.61.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:31:52.724004  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:31:52.728939  532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:31:52.728966  532844 api_server.go:103] status: https://192.168.61.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:31:53.223606  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:31:53.229497  532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:31:53.229530  532844 api_server.go:103] status: https://192.168.61.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:31:53.724183  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:31:53.729116  532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 200:
	ok
	I0127 12:31:53.741599  532844 api_server.go:141] control plane version: v1.32.1
	I0127 12:31:53.741628  532844 api_server.go:131] duration metric: took 4.518505066s to wait for apiserver health ...
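
The healthz wait above simply polls https://192.168.61.130:8444/healthz until it answers 200, tolerating the 403 and 500 responses seen while RBAC bootstrapping completes. A minimal Go sketch of such a poll, assuming the URL from the log and skipping TLS verification only because the sketch has no access to the cluster CA (the helper name, timeout, and interval are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls the given URL until it returns HTTP 200 or the
// timeout elapses. Non-200 responses and connection errors are retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver not healthy after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForHealthz("https://192.168.61.130:8444/healthz", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver is healthy")
}
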
	I0127 12:31:53.741640  532844 cni.go:84] Creating CNI manager for ""
	I0127 12:31:53.741648  532844 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:31:53.743308  532844 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:31:53.744540  532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:31:53.765085  532844 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:31:53.813905  532844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:31:53.829888  532844 system_pods.go:59] 8 kube-system pods found
	I0127 12:31:53.829951  532844 system_pods.go:61] "coredns-668d6bf9bc-gtq6k" [337012ef-f644-4df0-9900-3ef495baaf19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:31:53.829967  532844 system_pods.go:61] "etcd-default-k8s-diff-port-887672" [432ce52d-6cc0-425b-a82b-b3e19e0c920b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:31:53.829987  532844 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-887672" [68cb3d9a-e70d-4196-a25f-d780381ab4d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:31:53.829999  532844 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-887672" [1204b039-b7ae-40e5-b06e-f2a38d576059] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:31:53.830015  532844 system_pods.go:61] "kube-proxy-j68n8" [04a0e213-6b28-4d07-a442-0a75ade8f84e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:31:53.830030  532844 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-887672" [d89d1804-ce84-4fcf-91fe-aec3dc8f9f47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:31:53.830044  532844 system_pods.go:61] "metrics-server-f79f97bbb-9kjfb" [105e6f5f-bd22-42a3-9aae-39e97087d7f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:31:53.830066  532844 system_pods.go:61] "storage-provisioner" [86e3cbc8-8fa7-4d1f-863f-68e62f34d609] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:31:53.830076  532844 system_pods.go:74] duration metric: took 16.142517ms to wait for pod list to return data ...
	I0127 12:31:53.830103  532844 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:31:53.835520  532844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:31:53.835545  532844 node_conditions.go:123] node cpu capacity is 2
	I0127 12:31:53.835557  532844 node_conditions.go:105] duration metric: took 5.438147ms to run NodePressure ...
	I0127 12:31:53.835574  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:31:54.156438  532844 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 12:31:54.161457  532844 kubeadm.go:739] kubelet initialised
	I0127 12:31:54.161483  532844 kubeadm.go:740] duration metric: took 5.010173ms waiting for restarted kubelet to initialise ...
	I0127 12:31:54.161496  532844 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:31:54.166481  532844 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-gtq6k" in "kube-system" namespace to be "Ready" ...
	I0127 12:31:56.173236  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-gtq6k" in "kube-system" namespace has status "Ready":"False"
	I0127 12:31:58.174077  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-gtq6k" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:00.177185  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-gtq6k" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:02.673595  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-gtq6k" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:04.672699  532844 pod_ready.go:93] pod "coredns-668d6bf9bc-gtq6k" in "kube-system" namespace has status "Ready":"True"
	I0127 12:32:04.672723  532844 pod_ready.go:82] duration metric: took 10.506217257s for pod "coredns-668d6bf9bc-gtq6k" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:04.672755  532844 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:04.678190  532844 pod_ready.go:93] pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:32:04.678211  532844 pod_ready.go:82] duration metric: took 5.446982ms for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:04.678220  532844 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:04.683045  532844 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:32:04.683065  532844 pod_ready.go:82] duration metric: took 4.839155ms for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:04.683074  532844 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:06.689587  532844 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:32:06.689613  532844 pod_ready.go:82] duration metric: took 2.006533109s for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:06.689627  532844 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j68n8" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:06.694086  532844 pod_ready.go:93] pod "kube-proxy-j68n8" in "kube-system" namespace has status "Ready":"True"
	I0127 12:32:06.694117  532844 pod_ready.go:82] duration metric: took 4.483257ms for pod "kube-proxy-j68n8" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:06.694130  532844 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:08.200536  532844 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:32:08.200558  532844 pod_ready.go:82] duration metric: took 1.506419965s for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:08.200568  532844 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" ...
	I0127 12:32:10.214059  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:12.707208  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:15.207273  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:17.207492  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:19.207590  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:21.708336  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:24.207505  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:26.207626  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:28.208038  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:30.708686  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:33.206397  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:35.207491  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:37.707425  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:40.207481  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:42.208236  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:44.707390  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:47.207272  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:49.207598  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:51.207874  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:53.707073  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:56.207917  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:32:58.209049  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:00.707597  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:03.207329  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:05.207602  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:07.207706  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:09.706669  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:11.707566  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:14.207481  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:16.707223  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:18.708010  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:21.207292  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:23.706928  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:25.708025  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:28.207434  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:30.207630  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:32.706593  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:35.206335  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:37.207372  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:39.208389  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:41.707096  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:44.206515  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:46.207544  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:48.708351  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:51.207471  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:53.708990  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:56.206378  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:33:58.206601  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:00.207038  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:02.207129  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:04.207197  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:06.707151  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:08.709062  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:11.207603  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:13.706793  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:15.708357  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:18.206610  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:20.707121  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:23.208026  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:25.708158  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:28.206827  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:30.707167  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:33.206401  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:35.207280  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:37.708401  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:40.206821  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:42.207253  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:44.708377  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:46.708500  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:48.709182  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:51.208623  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:53.706733  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:55.707401  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:34:57.707711  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:00.208728  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:02.706402  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:04.707241  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:07.207967  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:09.706982  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:11.708667  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:13.710357  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:16.208193  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:18.706799  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:21.207988  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:23.706004  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:26.209274  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:28.707766  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:31.207181  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:33.209153  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:35.707999  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:38.206551  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:40.206903  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:42.207781  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:44.707430  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:46.708896  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:48.711637  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:51.208760  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:53.708069  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:56.209743  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:58.707466  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:01.206257  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:03.206663  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:05.207472  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:07.706605  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:08.201622  532844 pod_ready.go:82] duration metric: took 4m0.001032286s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" ...
	E0127 12:36:08.201658  532844 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 12:36:08.201683  532844 pod_ready.go:39] duration metric: took 4m14.040174083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
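	[editor's note] The pod_ready wait that just timed out is, in essence, a poll of each pod's PodReady condition until it turns True or the per-pod deadline expires. A hedged client-go sketch of that idea follows; it is not minikube's pod_ready.go, and the kubeconfig path and pod name are taken from the log purely as example inputs.

	// pod_ready_wait.go - hedged sketch of waiting for a pod's Ready condition via client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready
					}
				}
			}
			time.Sleep(2 * time.Second) // roughly the polling cadence seen in the log
		}
		return fmt.Errorf("pod %s/%s never became Ready within %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-f79f97bbb-9kjfb", 4*time.Minute))
	}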
	I0127 12:36:08.201724  532844 kubeadm.go:597] duration metric: took 4m21.555444284s to restartPrimaryControlPlane
	W0127 12:36:08.201798  532844 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:36:08.201833  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 12:36:10.133466  532844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.93160232s)
	I0127 12:36:10.133550  532844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:36:10.155296  532844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:36:10.170023  532844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:36:10.183165  532844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:36:10.183194  532844 kubeadm.go:157] found existing configuration files:
	
	I0127 12:36:10.183257  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 12:36:10.195175  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:36:10.195253  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:36:10.208349  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 12:36:10.220351  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:36:10.220429  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:36:10.238914  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 12:36:10.254995  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:36:10.255067  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:36:10.266753  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 12:36:10.278422  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:36:10.278490  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
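	[editor's note] The cleanup sequence above checks each kubeconfig on the node for the expected control-plane endpoint and deletes any file that is missing or points elsewhere, so kubeadm init regenerates it. A minimal Go sketch of that logic follows; the file list and endpoint come from the log, while the helper itself is illustrative rather than minikube's kubeadm.go.

	// stale_config_cleanup.go - hedged sketch of the stale kubeconfig cleanup step.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func cleanupStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing at a different endpoint: remove so kubeadm rewrites it.
				os.Remove(f)
				fmt.Printf("removed stale config %s\n", f)
			}
		}
	}

	func main() {
		cleanupStaleConfigs("https://control-plane.minikube.internal:8444", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}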
	I0127 12:36:10.292279  532844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:36:10.351007  532844 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:36:10.351189  532844 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:36:10.469769  532844 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:36:10.469949  532844 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:36:10.470056  532844 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:36:10.479353  532844 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:36:10.481858  532844 out.go:235]   - Generating certificates and keys ...
	I0127 12:36:10.481959  532844 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:36:10.482038  532844 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:36:10.482135  532844 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:36:10.482236  532844 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:36:10.482358  532844 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:36:10.482442  532844 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:36:10.482525  532844 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:36:10.482633  532844 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:36:10.483039  532844 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:36:10.483619  532844 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:36:10.483746  532844 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:36:10.483829  532844 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:36:10.585561  532844 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:36:10.784195  532844 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:36:10.958020  532844 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:36:11.223196  532844 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:36:11.439416  532844 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:36:11.440271  532844 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:36:11.444236  532844 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:36:11.445766  532844 out.go:235]   - Booting up control plane ...
	I0127 12:36:11.445895  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:36:11.445993  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:36:11.447764  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:36:11.484418  532844 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:36:11.496508  532844 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:36:11.496594  532844 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:36:11.681886  532844 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:36:11.682039  532844 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:36:12.183183  532844 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.076889ms
	I0127 12:36:12.183305  532844 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:36:18.185418  532844 kubeadm.go:310] [api-check] The API server is healthy after 6.002059282s
	I0127 12:36:18.204454  532844 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:36:18.218201  532844 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:36:18.245054  532844 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:36:18.245331  532844 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-887672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:36:18.257186  532844 kubeadm.go:310] [bootstrap-token] Using token: 5yhtlj.kyb5uzy41lrz34us
	I0127 12:36:18.258581  532844 out.go:235]   - Configuring RBAC rules ...
	I0127 12:36:18.258747  532844 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:36:18.265191  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:36:18.272296  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:36:18.285037  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:36:18.285204  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:36:18.285313  532844 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:36:18.593364  532844 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:36:19.042942  532844 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:36:19.593432  532844 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:36:19.594797  532844 kubeadm.go:310] 
	I0127 12:36:19.594875  532844 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:36:19.594888  532844 kubeadm.go:310] 
	I0127 12:36:19.594970  532844 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:36:19.594981  532844 kubeadm.go:310] 
	I0127 12:36:19.595011  532844 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:36:19.595081  532844 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:36:19.595152  532844 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:36:19.595166  532844 kubeadm.go:310] 
	I0127 12:36:19.595239  532844 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:36:19.595246  532844 kubeadm.go:310] 
	I0127 12:36:19.595301  532844 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:36:19.595308  532844 kubeadm.go:310] 
	I0127 12:36:19.595371  532844 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:36:19.595464  532844 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:36:19.595545  532844 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:36:19.595554  532844 kubeadm.go:310] 
	I0127 12:36:19.595667  532844 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:36:19.595757  532844 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:36:19.595767  532844 kubeadm.go:310] 
	I0127 12:36:19.595869  532844 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
	I0127 12:36:19.595998  532844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
	I0127 12:36:19.596017  532844 kubeadm.go:310] 	--control-plane 
	I0127 12:36:19.596021  532844 kubeadm.go:310] 
	I0127 12:36:19.596121  532844 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:36:19.596137  532844 kubeadm.go:310] 
	I0127 12:36:19.596223  532844 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
	I0127 12:36:19.596305  532844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 
	I0127 12:36:19.598645  532844 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:36:19.598687  532844 cni.go:84] Creating CNI manager for ""
	I0127 12:36:19.598696  532844 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:19.600188  532844 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:36:19.601391  532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:36:19.615483  532844 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:36:19.641045  532844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:36:19.641123  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:19.641161  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-887672 minikube.k8s.io/updated_at=2025_01_27T12_36_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=default-k8s-diff-port-887672 minikube.k8s.io/primary=true
	I0127 12:36:19.655315  532844 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:19.893685  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:20.394472  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:20.893933  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:21.394823  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:21.893992  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:22.393950  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:22.894084  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:23.394506  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:23.893909  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:24.393790  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:24.491305  532844 kubeadm.go:1113] duration metric: took 4.850249048s to wait for elevateKubeSystemPrivileges
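	[editor's note] The repeated "kubectl get sa default" runs above are a wait for the default ServiceAccount to exist before the cluster-admin binding created a few lines earlier is considered usable. The sketch below expresses the same wait with client-go purely for illustration (minikube shells out to kubectl instead); the kubeconfig path and timeout are assumptions.

	// default_sa_wait.go - hedged sketch of polling for the default ServiceAccount.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForDefaultSA(cs *kubernetes.Clientset, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				return nil // the default ServiceAccount exists
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s retry cadence in the log
		}
		return fmt.Errorf("default ServiceAccount never appeared within %s", timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		fmt.Println(waitForDefaultSA(kubernetes.NewForConfigOrDie(cfg), time.Minute))
	}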
	I0127 12:36:24.491356  532844 kubeadm.go:394] duration metric: took 4m37.901720321s to StartCluster
	I0127 12:36:24.491385  532844 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:24.491488  532844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:24.493752  532844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:24.494040  532844 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.130 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:24.494175  532844 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:24.494273  532844 config.go:182] Loaded profile config "default-k8s-diff-port-887672": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:24.494285  532844 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494323  532844 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-887672"
	I0127 12:36:24.494316  532844 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494338  532844 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494372  532844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-887672"
	I0127 12:36:24.494381  532844 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.494394  532844 addons.go:247] addon dashboard should already be in state true
	W0127 12:36:24.494332  532844 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:24.494432  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494463  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494323  532844 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494553  532844 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.494564  532844 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:24.494598  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494863  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494871  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494871  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494905  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.494911  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495037  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.495049  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495123  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495481  532844 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:24.496811  532844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:24.513577  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0127 12:36:24.514115  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.514694  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.514720  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.515161  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.515484  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
	I0127 12:36:24.515836  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0127 12:36:24.515999  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0127 12:36:24.516094  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.516144  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.516192  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.516413  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.516675  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.516695  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.516974  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.516994  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.517001  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.517393  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.517583  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.517647  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.518197  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.518252  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.518469  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.518494  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.518868  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.519422  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.519470  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.521629  532844 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.521653  532844 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:24.521684  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.522040  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.522081  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.534712  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0127 12:36:24.535195  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.536504  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.536527  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.536554  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0127 12:36:24.536902  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.536959  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.537111  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.537597  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.537616  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.537969  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.538145  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.538989  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0127 12:36:24.539580  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.540009  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0127 12:36:24.540196  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.540422  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.540715  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.540879  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.540902  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.540934  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.540948  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.541341  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.541388  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.541685  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.542042  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.542090  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.542251  532844 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:24.542373  532844 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:24.543206  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.543412  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:24.543430  532844 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:24.543460  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.544493  532844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:24.545545  532844 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:24.545643  532844 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:24.545656  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:24.545671  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.546541  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:24.546563  532844 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:24.546584  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.547093  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.547276  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.547478  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.547900  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.548065  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.547944  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.548278  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.549918  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550146  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.550170  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550429  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.550517  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550608  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.550758  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.550914  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.550956  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.550993  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.551165  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.551308  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.551460  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.551595  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.566621  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0127 12:36:24.567007  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.567434  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.567460  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.567879  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.568040  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.569632  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.569844  532844 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:24.569859  532844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:24.569875  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.572937  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.573361  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.573377  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.573577  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.573757  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.573888  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.574044  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.747290  532844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:24.779846  532844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-887672" to be "Ready" ...
	I0127 12:36:24.813551  532844 node_ready.go:49] node "default-k8s-diff-port-887672" has status "Ready":"True"
	I0127 12:36:24.813582  532844 node_ready.go:38] duration metric: took 33.68566ms for node "default-k8s-diff-port-887672" to be "Ready" ...
	I0127 12:36:24.813594  532844 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:24.825398  532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:24.855841  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:24.855869  532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:24.865288  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:24.890399  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:24.907963  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:24.907990  532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:24.923409  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:24.923434  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:24.967186  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:24.967211  532844 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:25.003133  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:25.003167  532844 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:25.031491  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:25.031515  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:25.086171  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:25.086201  532844 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:25.147825  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:25.152298  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:25.152324  532844 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:25.203235  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:25.203264  532844 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:25.242547  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:25.242578  532844 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:25.281622  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:25.281659  532844 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:25.312416  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:25.312444  532844 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:25.365802  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:25.651534  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.651566  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.651590  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.651612  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.651995  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652009  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652020  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652021  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652033  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.652036  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652040  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.652047  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652055  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.652063  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.652511  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652572  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652594  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652580  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652592  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652796  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.667377  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.667403  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.667693  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.667709  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.974214  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.974246  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.974553  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.974574  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.974591  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.974600  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.974992  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.975017  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.975032  532844 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-887672"
	I0127 12:36:26.960702  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:27.097489  532844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.731632212s)
	I0127 12:36:27.097551  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:27.097567  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:27.097886  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:27.097909  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:27.097909  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:27.097917  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:27.097935  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:27.098221  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:27.098291  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:27.099837  532844 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-887672 addons enable metrics-server
	
	I0127 12:36:27.101354  532844 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 12:36:27.102395  532844 addons.go:514] duration metric: took 2.608238219s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 12:36:29.331790  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:31.334726  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:33.834237  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:34.374688  532844 pod_ready.go:93] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.374713  532844 pod_ready.go:82] duration metric: took 9.549290033s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.374725  532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.399299  532844 pod_ready.go:93] pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.399323  532844 pod_ready.go:82] duration metric: took 24.589743ms for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.399332  532844 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.421329  532844 pod_ready.go:93] pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.421359  532844 pod_ready.go:82] duration metric: took 22.019877ms for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.421399  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.427922  532844 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.427946  532844 pod_ready.go:82] duration metric: took 6.537775ms for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.427957  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.447675  532844 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.447701  532844 pod_ready.go:82] duration metric: took 19.736139ms for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.447713  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.729783  532844 pod_ready.go:93] pod "kube-proxy-xl46c" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.729827  532844 pod_ready.go:82] duration metric: took 282.092476ms for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.729841  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:35.128755  532844 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:35.128781  532844 pod_ready.go:82] duration metric: took 398.931642ms for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:35.128790  532844 pod_ready.go:39] duration metric: took 10.315186396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:35.128806  532844 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:35.128870  532844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:35.148548  532844 api_server.go:72] duration metric: took 10.654456335s to wait for apiserver process to appear ...
	I0127 12:36:35.148574  532844 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:35.148597  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:36:35.156175  532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 200:
	ok
	I0127 12:36:35.157842  532844 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:35.157866  532844 api_server.go:131] duration metric: took 9.283401ms to wait for apiserver health ...
	I0127 12:36:35.157875  532844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:35.339567  532844 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:35.339606  532844 system_pods.go:61] "coredns-668d6bf9bc-jc882" [cc7b1851-f0b2-406d-b972-155b02dcefc6] Running
	I0127 12:36:35.339614  532844 system_pods.go:61] "coredns-668d6bf9bc-s6rln" [553e1b5c-1bb3-48f4-bf25-6837dae6b627] Running
	I0127 12:36:35.339620  532844 system_pods.go:61] "etcd-default-k8s-diff-port-887672" [cfe71b01-c4c5-4772-904f-0f22ebdc9481] Running
	I0127 12:36:35.339625  532844 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-887672" [09952f8b-2235-45c2-aac8-328369a341dd] Running
	I0127 12:36:35.339631  532844 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-887672" [6aee732f-0e4f-4362-b2d5-38e533a146c4] Running
	I0127 12:36:35.339636  532844 system_pods.go:61] "kube-proxy-xl46c" [c2ddd14b-3d9e-4985-935e-5f64d188e68e] Running
	I0127 12:36:35.339641  532844 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-887672" [7a436b79-cc6a-4311-9cb6-24537ed6aed0] Running
	I0127 12:36:35.339652  532844 system_pods.go:61] "metrics-server-f79f97bbb-twqz4" [107a2af6-937d-4c95-a8dd-f47f59dd3afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:35.339659  532844 system_pods.go:61] "storage-provisioner" [ebd493f5-ab93-4083-8174-aceb44741e99] Running
	I0127 12:36:35.339675  532844 system_pods.go:74] duration metric: took 181.791009ms to wait for pod list to return data ...
	I0127 12:36:35.339689  532844 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:35.528977  532844 default_sa.go:45] found service account: "default"
	I0127 12:36:35.529018  532844 default_sa.go:55] duration metric: took 189.31757ms for default service account to be created ...
	I0127 12:36:35.529033  532844 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:36:35.732388  532844 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-887672 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-887672 -n default-k8s-diff-port-887672
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-887672 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-887672 logs -n 25: (1.182909914s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-215237                  | no-preload-215237            | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | 27 Jan 25 12:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215237                                   | no-preload-215237            | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-346100                 | embed-certs-346100           | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-346100                                  | embed-certs-346100           | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-887672       | default-k8s-diff-port-887672 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-887672 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC |                     |
	|         | default-k8s-diff-port-887672                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-858845             | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-858845 image                           | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	| delete  | -p old-k8s-version-858845                              | old-k8s-version-858845       | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	| start   | -p newest-cni-610630 --memory=2200 --alsologtostderr   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-610630             | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-610630                  | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-610630 --memory=2200 --alsologtostderr   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:36 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-610630 image list                           | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	| delete  | -p newest-cni-610630                                   | newest-cni-610630            | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
	| delete  | -p no-preload-215237                                   | no-preload-215237            | jenkins | v1.35.0 | 27 Jan 25 12:57 UTC | 27 Jan 25 12:57 UTC |
	| delete  | -p embed-certs-346100                                  | embed-certs-346100           | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:35:43
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:35:43.059479  534894 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:35:43.059651  534894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:35:43.059664  534894 out.go:358] Setting ErrFile to fd 2...
	I0127 12:35:43.059671  534894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:35:43.059931  534894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 12:35:43.061091  534894 out.go:352] Setting JSON to false
	I0127 12:35:43.062772  534894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11886,"bootTime":1737969457,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:35:43.062914  534894 start.go:139] virtualization: kvm guest
	I0127 12:35:43.064927  534894 out.go:177] * [newest-cni-610630] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:35:43.066246  534894 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:35:43.066268  534894 notify.go:220] Checking for updates...
	I0127 12:35:43.068595  534894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:35:43.069716  534894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:35:43.070810  534894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 12:35:43.071853  534894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:35:43.072978  534894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:35:43.074838  534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:35:43.075450  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:43.075519  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:43.091909  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35965
	I0127 12:35:43.093149  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:43.093802  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:35:43.093834  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:43.094269  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:43.094579  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:35:43.094848  534894 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:35:43.095161  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:43.095202  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:43.110695  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0127 12:35:43.111212  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:43.111903  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:35:43.111935  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:43.112295  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:43.112533  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:35:43.153545  534894 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:35:40.799070  532344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:35:40.816802  532344 node_ready.go:35] waiting up to 6m0s for node "no-preload-215237" to be "Ready" ...
	I0127 12:35:40.842677  532344 node_ready.go:49] node "no-preload-215237" has status "Ready":"True"
	I0127 12:35:40.842703  532344 node_ready.go:38] duration metric: took 25.862086ms for node "no-preload-215237" to be "Ready" ...
	I0127 12:35:40.842716  532344 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:40.853263  532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:40.876376  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:35:40.876407  532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:35:40.898870  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:35:40.903314  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:35:40.916620  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:35:40.916649  532344 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:35:41.067992  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:35:41.068023  532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:35:41.072700  532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:35:41.072728  532344 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:35:41.155398  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:35:41.155426  532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:35:41.194887  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:35:41.230877  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:35:41.230909  532344 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:35:41.313376  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:35:41.313400  532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:35:41.442010  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:35:41.442049  532344 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:35:41.486996  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:35:41.487028  532344 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:35:41.616020  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:35:41.616057  532344 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:35:41.690855  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:35:41.690886  532344 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:35:41.720821  532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:35:41.720851  532344 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:35:41.754849  532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:35:41.990168  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091255427s)
	I0127 12:35:41.990220  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086878371s)
	I0127 12:35:41.990249  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990262  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990249  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990370  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990668  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.990683  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.990719  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.990725  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.990733  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.990747  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990758  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.990821  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.990734  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:41.990857  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:41.991027  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.991042  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:41.992412  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:41.992462  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:41.992477  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.004951  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.004969  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.005238  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.005254  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.005271  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:42.472191  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.277235038s)
	I0127 12:35:42.472268  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.472283  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.472619  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:42.472665  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.472683  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.472697  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:42.472706  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:42.472985  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:42.473012  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:42.473024  532344 addons.go:479] Verifying addon metrics-server=true in "no-preload-215237"
	I0127 12:35:42.890307  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:43.165047  532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.410145551s)
	I0127 12:35:43.165103  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:43.165123  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:43.165633  532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
	I0127 12:35:43.165657  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:43.165676  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:43.165692  532344 main.go:141] libmachine: Making call to close driver server
	I0127 12:35:43.165705  532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
	I0127 12:35:43.165941  532344 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:35:43.165957  532344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:35:43.167364  532344 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-215237 addons enable metrics-server
	
	I0127 12:35:43.168535  532344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 12:35:43.154513  534894 start.go:297] selected driver: kvm2
	I0127 12:35:43.154531  534894 start.go:901] validating driver "kvm2" against &{Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Listen
Address: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:35:43.154653  534894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:35:43.155362  534894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:35:43.155469  534894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:35:43.172617  534894 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:35:43.173026  534894 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 12:35:43.173063  534894 cni.go:84] Creating CNI manager for ""
	I0127 12:35:43.173110  534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:35:43.173145  534894 start.go:340] cluster config:
	{Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:35:43.173269  534894 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:35:43.174747  534894 out.go:177] * Starting "newest-cni-610630" primary control-plane node in "newest-cni-610630" cluster
	I0127 12:35:43.175803  534894 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:35:43.175846  534894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 12:35:43.175857  534894 cache.go:56] Caching tarball of preloaded images
	I0127 12:35:43.175957  534894 preload.go:172] Found /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:35:43.175970  534894 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 12:35:43.176077  534894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/config.json ...
	I0127 12:35:43.176271  534894 start.go:360] acquireMachinesLock for newest-cni-610630: {Name:mk818835aef0de701295cc2c98fea95e1be33202 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:35:43.176324  534894 start.go:364] duration metric: took 32.573µs to acquireMachinesLock for "newest-cni-610630"
	I0127 12:35:43.176345  534894 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:35:43.176356  534894 fix.go:54] fixHost starting: 
	I0127 12:35:43.176686  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:35:43.176750  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:35:43.191549  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37401
	I0127 12:35:43.191935  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:35:43.192419  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:35:43.192448  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:35:43.192934  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:35:43.193138  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:35:43.193300  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:35:43.195116  534894 fix.go:112] recreateIfNeeded on newest-cni-610630: state=Stopped err=<nil>
	I0127 12:35:43.195141  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	W0127 12:35:43.195320  534894 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:35:43.196456  534894 out.go:177] * Restarting existing kvm2 VM for "newest-cni-610630" ...
	I0127 12:35:43.169652  532344 addons.go:514] duration metric: took 2.587685868s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 12:35:45.359702  532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:42.352585  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:44.353035  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:46.353087  532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:44.707430  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:46.708896  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:43.197457  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Start
	I0127 12:35:43.197621  534894 main.go:141] libmachine: (newest-cni-610630) starting domain...
	I0127 12:35:43.197646  534894 main.go:141] libmachine: (newest-cni-610630) ensuring networks are active...
	I0127 12:35:43.198412  534894 main.go:141] libmachine: (newest-cni-610630) Ensuring network default is active
	I0127 12:35:43.198762  534894 main.go:141] libmachine: (newest-cni-610630) Ensuring network mk-newest-cni-610630 is active
	I0127 12:35:43.199182  534894 main.go:141] libmachine: (newest-cni-610630) getting domain XML...
	I0127 12:35:43.199981  534894 main.go:141] libmachine: (newest-cni-610630) creating domain...
	I0127 12:35:44.514338  534894 main.go:141] libmachine: (newest-cni-610630) waiting for IP...
	I0127 12:35:44.515307  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:44.515803  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:44.515875  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:44.515771  534929 retry.go:31] will retry after 248.83242ms: waiting for domain to come up
	I0127 12:35:44.766511  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:44.767046  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:44.767081  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:44.767011  534929 retry.go:31] will retry after 381.268975ms: waiting for domain to come up
	I0127 12:35:45.149680  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:45.150281  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:45.150314  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:45.150226  534929 retry.go:31] will retry after 435.74049ms: waiting for domain to come up
	I0127 12:35:45.587978  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:45.588682  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:45.588719  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:45.588634  534929 retry.go:31] will retry after 577.775914ms: waiting for domain to come up
	I0127 12:35:46.168596  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:46.169297  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:46.169332  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:46.169238  534929 retry.go:31] will retry after 539.718923ms: waiting for domain to come up
	I0127 12:35:46.711082  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:46.711652  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:46.711676  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:46.711635  534929 retry.go:31] will retry after 607.430128ms: waiting for domain to come up
	I0127 12:35:47.320403  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:47.320941  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:47.321006  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:47.320921  534929 retry.go:31] will retry after 772.973348ms: waiting for domain to come up
	I0127 12:35:46.359497  532344 pod_ready.go:93] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:46.359531  532344 pod_ready.go:82] duration metric: took 5.506181911s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:46.359547  532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.867744  532344 pod_ready.go:93] pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.867773  532344 pod_ready.go:82] duration metric: took 1.508215371s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.867785  532344 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.872748  532344 pod_ready.go:93] pod "etcd-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.872769  532344 pod_ready.go:82] duration metric: took 4.975217ms for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.872782  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.879135  532344 pod_ready.go:93] pod "kube-apiserver-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.879153  532344 pod_ready.go:82] duration metric: took 6.364009ms for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.879170  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.884792  532344 pod_ready.go:93] pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.884809  532344 pod_ready.go:82] duration metric: took 5.632068ms for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.884817  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.957535  532344 pod_ready.go:93] pod "kube-proxy-bbnm2" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:47.957564  532344 pod_ready.go:82] duration metric: took 72.739132ms for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:47.957577  532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:48.358062  532344 pod_ready.go:93] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:48.358087  532344 pod_ready.go:82] duration metric: took 400.502078ms for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:48.358095  532344 pod_ready.go:39] duration metric: took 7.515367235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:48.358124  532344 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:35:48.358180  532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:48.381657  532344 api_server.go:72] duration metric: took 7.799751759s to wait for apiserver process to appear ...
	I0127 12:35:48.381684  532344 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:35:48.381704  532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0127 12:35:48.387590  532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 200:
	ok
	I0127 12:35:48.388765  532344 api_server.go:141] control plane version: v1.32.1
	I0127 12:35:48.388787  532344 api_server.go:131] duration metric: took 7.09706ms to wait for apiserver health ...
	I0127 12:35:48.388795  532344 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:35:48.560605  532344 system_pods.go:59] 9 kube-system pods found
	I0127 12:35:48.560642  532344 system_pods.go:61] "coredns-668d6bf9bc-v9stn" [011e6981-39d0-4fa1-bf1b-3d1e06c7c71a] Running
	I0127 12:35:48.560650  532344 system_pods.go:61] "coredns-668d6bf9bc-wwb9p" [0a034560-980a-40fb-9603-be18d02b6f05] Running
	I0127 12:35:48.560656  532344 system_pods.go:61] "etcd-no-preload-215237" [8b9ab7f2-224f-4373-9dc2-fa794a60d922] Running
	I0127 12:35:48.560659  532344 system_pods.go:61] "kube-apiserver-no-preload-215237" [064e0d8e-5d82-42bb-979d-cd0e9aa13f56] Running
	I0127 12:35:48.560663  532344 system_pods.go:61] "kube-controller-manager-no-preload-215237" [dd9c190f-c01e-4fa7-b033-57463b032d30] Running
	I0127 12:35:48.560666  532344 system_pods.go:61] "kube-proxy-bbnm2" [dd89ae69-6ad2-44cb-9c80-ba5529e22dc1] Running
	I0127 12:35:48.560671  532344 system_pods.go:61] "kube-scheduler-no-preload-215237" [41c25fba-7af8-4e0e-b96d-57be786d703c] Running
	I0127 12:35:48.560680  532344 system_pods.go:61] "metrics-server-f79f97bbb-lqck5" [3447c2da-cbb0-412c-a8d9-2be32c8e6dad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:35:48.560686  532344 system_pods.go:61] "storage-provisioner" [9627d136-2ecb-4cc3-969d-b62de2261147] Running
	I0127 12:35:48.560696  532344 system_pods.go:74] duration metric: took 171.894881ms to wait for pod list to return data ...
	I0127 12:35:48.560709  532344 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:35:48.760164  532344 default_sa.go:45] found service account: "default"
	I0127 12:35:48.760270  532344 default_sa.go:55] duration metric: took 199.548191ms for default service account to be created ...
	I0127 12:35:48.760295  532344 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:35:48.961828  532344 system_pods.go:87] 9 kube-system pods found
	I0127 12:35:48.846560  532607 pod_ready.go:82] duration metric: took 4m0.000837349s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:48.846588  532607 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 12:35:48.846609  532607 pod_ready.go:39] duration metric: took 4m15.043496386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:48.846642  532607 kubeadm.go:597] duration metric: took 4m22.373102966s to restartPrimaryControlPlane
	W0127 12:35:48.846704  532607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:35:48.846732  532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 12:35:51.040149  532607 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.193395005s)
	I0127 12:35:51.040242  532607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:35:51.059048  532607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:35:51.071298  532607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:35:51.083050  532607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:35:51.083071  532607 kubeadm.go:157] found existing configuration files:
	
	I0127 12:35:51.083125  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:35:51.095124  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:35:51.095208  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:35:51.109222  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:35:51.120314  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:35:51.120390  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:35:51.129841  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:35:51.138490  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:35:51.138545  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:35:51.148658  532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:35:51.157842  532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:35:51.157894  532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:35:51.167146  532607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:35:51.220576  532607 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:35:51.220796  532607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:35:51.342653  532607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:35:51.342830  532607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:35:51.343020  532607 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:35:51.348865  532607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:35:51.351235  532607 out.go:235]   - Generating certificates and keys ...
	I0127 12:35:51.351355  532607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:35:51.351445  532607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:35:51.351549  532607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:35:51.351635  532607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:35:51.351728  532607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:35:51.351801  532607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:35:51.351908  532607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:35:51.352000  532607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:35:51.352111  532607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:35:51.352262  532607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:35:51.352422  532607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:35:51.352546  532607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:35:51.416524  532607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:35:51.666997  532607 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:35:51.867237  532607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:35:52.007584  532607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:35:52.100986  532607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:35:52.101889  532607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:35:52.105806  532607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:35:52.107605  532607 out.go:235]   - Booting up control plane ...
	I0127 12:35:52.107745  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:35:52.108083  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:35:52.109913  532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:35:52.146307  532607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:35:52.156130  532607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:35:52.156211  532607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:35:52.316523  532607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:35:52.316653  532607 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:35:48.711637  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:51.208760  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:48.096119  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:48.096791  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:48.096823  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:48.096728  534929 retry.go:31] will retry after 1.301268199s: waiting for domain to come up
	I0127 12:35:49.400077  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:49.400697  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:49.400729  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:49.400664  534929 retry.go:31] will retry after 1.62599798s: waiting for domain to come up
	I0127 12:35:51.029156  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:51.029715  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:51.029746  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:51.029706  534929 retry.go:31] will retry after 1.477748588s: waiting for domain to come up
	I0127 12:35:52.509484  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:52.510252  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:52.510299  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:52.510150  534929 retry.go:31] will retry after 1.875473187s: waiting for domain to come up
	I0127 12:35:53.322303  532607 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005635238s
	I0127 12:35:53.322436  532607 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:35:53.708069  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:56.209743  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:54.387170  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:54.387808  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:54.387840  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:54.387764  534929 retry.go:31] will retry after 2.219284161s: waiting for domain to come up
	I0127 12:35:56.609666  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:56.610140  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:56.610163  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:56.610112  534929 retry.go:31] will retry after 3.124115638s: waiting for domain to come up
	I0127 12:35:58.324673  532607 kubeadm.go:310] [api-check] The API server is healthy after 5.002577765s
	I0127 12:35:58.341207  532607 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:35:58.354763  532607 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:35:58.376218  532607 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:35:58.376468  532607 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-346100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:35:58.389424  532607 kubeadm.go:310] [bootstrap-token] Using token: 5069a0.5f3g1pdxhpmrcoga
	I0127 12:35:58.390773  532607 out.go:235]   - Configuring RBAC rules ...
	I0127 12:35:58.390901  532607 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:35:58.397069  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:35:58.405069  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:35:58.409291  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:35:58.412914  532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:35:58.415499  532607 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:35:58.732028  532607 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:35:59.154936  532607 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:35:59.732670  532607 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:35:59.734653  532607 kubeadm.go:310] 
	I0127 12:35:59.734754  532607 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:35:59.734788  532607 kubeadm.go:310] 
	I0127 12:35:59.734919  532607 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:35:59.734933  532607 kubeadm.go:310] 
	I0127 12:35:59.734978  532607 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:35:59.735094  532607 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:35:59.735193  532607 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:35:59.735206  532607 kubeadm.go:310] 
	I0127 12:35:59.735295  532607 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:35:59.735316  532607 kubeadm.go:310] 
	I0127 12:35:59.735384  532607 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:35:59.735392  532607 kubeadm.go:310] 
	I0127 12:35:59.735463  532607 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:35:59.735570  532607 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:35:59.735692  532607 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:35:59.735707  532607 kubeadm.go:310] 
	I0127 12:35:59.735853  532607 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:35:59.735964  532607 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:35:59.735986  532607 kubeadm.go:310] 
	I0127 12:35:59.736104  532607 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
	I0127 12:35:59.736265  532607 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
	I0127 12:35:59.736299  532607 kubeadm.go:310] 	--control-plane 
	I0127 12:35:59.736312  532607 kubeadm.go:310] 
	I0127 12:35:59.736432  532607 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:35:59.736441  532607 kubeadm.go:310] 
	I0127 12:35:59.736583  532607 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
	I0127 12:35:59.736761  532607 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 
	I0127 12:35:59.738118  532607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:35:59.738152  532607 cni.go:84] Creating CNI manager for ""
	I0127 12:35:59.738162  532607 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:35:59.739901  532607 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:35:59.741063  532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:35:59.759536  532607 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:35:59.777178  532607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:35:59.777199  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:59.777236  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-346100 minikube.k8s.io/updated_at=2025_01_27T12_35_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=embed-certs-346100 minikube.k8s.io/primary=true
	I0127 12:35:59.974092  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:59.974117  532607 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:00.474716  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:00.974693  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:01.474216  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:01.974205  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:35:58.707466  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:01.206257  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:35:59.736004  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:35:59.736626  534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
	I0127 12:35:59.736649  534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:59.736597  534929 retry.go:31] will retry after 3.849528984s: waiting for domain to come up
	I0127 12:36:02.475052  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:02.975120  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:03.474457  532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:03.577041  532607 kubeadm.go:1113] duration metric: took 3.799909499s to wait for elevateKubeSystemPrivileges
	I0127 12:36:03.577092  532607 kubeadm.go:394] duration metric: took 4m37.171719699s to StartCluster
	I0127 12:36:03.577128  532607 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:03.577224  532607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:03.579144  532607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:03.579423  532607 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:03.579505  532607 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:03.579620  532607 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-346100"
	I0127 12:36:03.579641  532607 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-346100"
	W0127 12:36:03.579650  532607 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:03.579651  532607 addons.go:69] Setting default-storageclass=true in profile "embed-certs-346100"
	I0127 12:36:03.579676  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.579688  532607 config.go:182] Loaded profile config "embed-certs-346100": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:03.579700  532607 addons.go:69] Setting dashboard=true in profile "embed-certs-346100"
	I0127 12:36:03.579723  532607 addons.go:238] Setting addon dashboard=true in "embed-certs-346100"
	I0127 12:36:03.579715  532607 addons.go:69] Setting metrics-server=true in profile "embed-certs-346100"
	W0127 12:36:03.579740  532607 addons.go:247] addon dashboard should already be in state true
	I0127 12:36:03.579694  532607 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-346100"
	I0127 12:36:03.579749  532607 addons.go:238] Setting addon metrics-server=true in "embed-certs-346100"
	W0127 12:36:03.579764  532607 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:03.579779  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.579800  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.580054  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580088  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580101  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580150  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580190  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580215  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.580233  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.580258  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.581024  532607 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:03.582429  532607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:03.598339  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0127 12:36:03.598375  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 12:36:03.598838  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.598892  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34331
	I0127 12:36:03.598919  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.599306  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.599470  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599486  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599497  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599511  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599722  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.599738  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.599912  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.599974  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.600223  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.600494  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600530  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600545  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600578  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600674  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.600699  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.600881  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0127 12:36:03.601524  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.602100  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.602116  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.602471  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.602687  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.606648  532607 addons.go:238] Setting addon default-storageclass=true in "embed-certs-346100"
	W0127 12:36:03.606677  532607 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:03.606709  532607 host.go:66] Checking if "embed-certs-346100" exists ...
	I0127 12:36:03.607067  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.607104  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.619967  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0127 12:36:03.620348  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33043
	I0127 12:36:03.620623  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.620935  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.621427  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.621447  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.621789  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.621804  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.621998  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.622221  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.622273  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.622543  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.624486  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.624677  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.625420  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43503
	I0127 12:36:03.626112  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.626167  532607 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:03.626180  532607 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:03.626583  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.626602  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.626611  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0127 12:36:03.626942  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.627027  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.627437  532607 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:03.627453  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:03.627464  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.627467  532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:03.627475  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.627504  532607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:03.627471  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.627836  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.628149  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.628561  532607 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:03.629535  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:03.629551  532607 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:03.629575  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.630434  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.631724  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.632213  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.632232  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.632448  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.632593  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.632682  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.632867  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.632996  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.633161  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.633189  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.633418  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.633573  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.633701  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.633812  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.634247  532607 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:03.635266  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:03.635284  532607 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:03.635305  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.637878  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.638306  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.638338  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.638542  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.638697  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.638867  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.639116  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.643537  532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0127 12:36:03.643881  532607 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:03.644309  532607 main.go:141] libmachine: Using API Version  1
	I0127 12:36:03.644327  532607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:03.644644  532607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:03.644952  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
	I0127 12:36:03.646128  532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
	I0127 12:36:03.646325  532607 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:03.646341  532607 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:03.646358  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
	I0127 12:36:03.649282  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.649641  532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
	I0127 12:36:03.649669  532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
	I0127 12:36:03.649910  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
	I0127 12:36:03.650077  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
	I0127 12:36:03.650198  532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
	I0127 12:36:03.650298  532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
	I0127 12:36:03.805663  532607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:03.824512  532607 node_ready.go:35] waiting up to 6m0s for node "embed-certs-346100" to be "Ready" ...
	I0127 12:36:03.856505  532607 node_ready.go:49] node "embed-certs-346100" has status "Ready":"True"
	I0127 12:36:03.856540  532607 node_ready.go:38] duration metric: took 31.977019ms for node "embed-certs-346100" to be "Ready" ...
	I0127 12:36:03.856555  532607 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:03.863683  532607 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:03.902624  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:03.925389  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:03.977654  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:03.977686  532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:04.012033  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:04.012063  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:04.029962  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:04.029991  532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:04.076532  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:04.076565  532607 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:04.136201  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:04.136229  532607 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:04.142268  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:04.142293  532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:04.174895  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:04.174919  532607 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:04.185938  532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:04.185959  532607 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:04.204606  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:04.226546  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:04.226574  532607 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:04.340411  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:04.340438  532607 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:04.424847  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.424878  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.425230  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:04.425269  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.425293  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.425304  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.425329  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.425596  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.425613  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.425627  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:04.443059  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:04.443080  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:04.443380  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:04.443404  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:04.457532  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:04.457557  532607 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:04.529771  532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:04.529803  532607 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:04.581907  532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:05.466462  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541011177s)
	I0127 12:36:05.466526  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.466544  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.466865  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.466934  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.466947  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.466957  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.466969  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.467283  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.467328  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.467300  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.677171  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.472522816s)
	I0127 12:36:05.677230  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.677244  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.677645  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.677684  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.677699  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.677711  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:05.677723  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:05.678056  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:05.678091  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:05.678115  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:05.678132  532607 addons.go:479] Verifying addon metrics-server=true in "embed-certs-346100"
	I0127 12:36:05.870203  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:06.503934  532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.921960102s)
	I0127 12:36:06.504007  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:06.504025  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:06.504372  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:06.504489  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:06.504506  532607 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:06.504514  532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
	I0127 12:36:06.504460  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:06.504814  532607 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:06.504834  532607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:06.504835  532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
	I0127 12:36:06.506475  532607 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-346100 addons enable metrics-server
	
	I0127 12:36:06.507672  532607 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 12:36:06.508878  532607 addons.go:514] duration metric: took 2.929397312s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 12:36:03.587872  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.588437  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has current primary IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.588458  534894 main.go:141] libmachine: (newest-cni-610630) found domain IP: 192.168.39.228
	I0127 12:36:03.588471  534894 main.go:141] libmachine: (newest-cni-610630) reserving static IP address...
	I0127 12:36:03.589076  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "newest-cni-610630", mac: "52:54:00:49:61:34", ip: "192.168.39.228"} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.589105  534894 main.go:141] libmachine: (newest-cni-610630) reserved static IP address 192.168.39.228 for domain newest-cni-610630
	I0127 12:36:03.589131  534894 main.go:141] libmachine: (newest-cni-610630) DBG | skip adding static IP to network mk-newest-cni-610630 - found existing host DHCP lease matching {name: "newest-cni-610630", mac: "52:54:00:49:61:34", ip: "192.168.39.228"}
	I0127 12:36:03.589141  534894 main.go:141] libmachine: (newest-cni-610630) waiting for SSH...
	I0127 12:36:03.589165  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Getting to WaitForSSH function...
	I0127 12:36:03.592182  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.592771  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.592796  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.593171  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Using SSH client type: external
	I0127 12:36:03.593190  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa (-rw-------)
	I0127 12:36:03.593218  534894 main.go:141] libmachine: (newest-cni-610630) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:36:03.593228  534894 main.go:141] libmachine: (newest-cni-610630) DBG | About to run SSH command:
	I0127 12:36:03.593239  534894 main.go:141] libmachine: (newest-cni-610630) DBG | exit 0
	I0127 12:36:03.733183  534894 main.go:141] libmachine: (newest-cni-610630) DBG | SSH cmd err, output: <nil>: 
	I0127 12:36:03.733566  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetConfigRaw
	I0127 12:36:03.734338  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:03.737083  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.737511  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.737553  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.737875  534894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/config.json ...
	I0127 12:36:03.738075  534894 machine.go:93] provisionDockerMachine start ...
	I0127 12:36:03.738099  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:03.738370  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:03.741025  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.741354  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.741384  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.741566  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:03.741756  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.741966  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.742141  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:03.742356  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:03.742588  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:03.742604  534894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:36:03.853610  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:36:03.853641  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
	I0127 12:36:03.853921  534894 buildroot.go:166] provisioning hostname "newest-cni-610630"
	I0127 12:36:03.853957  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
	I0127 12:36:03.854185  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:03.857441  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.857928  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.857961  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.858074  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:03.858293  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.858504  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.858678  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:03.858886  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:03.859093  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:03.859120  534894 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-610630 && echo "newest-cni-610630" | sudo tee /etc/hostname
	I0127 12:36:03.986908  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-610630
	
	I0127 12:36:03.986946  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:03.990070  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.990587  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:03.990628  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:03.990879  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:03.991122  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.991299  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:03.991452  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:03.991678  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:03.991897  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:03.991926  534894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-610630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-610630/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-610630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:36:04.113288  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:36:04.113333  534894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-471120/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-471120/.minikube}
	I0127 12:36:04.113360  534894 buildroot.go:174] setting up certificates
	I0127 12:36:04.113382  534894 provision.go:84] configureAuth start
	I0127 12:36:04.113398  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
	I0127 12:36:04.113676  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:04.116365  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.116714  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.116764  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.117068  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.119378  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.119713  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.119736  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.119918  534894 provision.go:143] copyHostCerts
	I0127 12:36:04.119990  534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem, removing ...
	I0127 12:36:04.120016  534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem
	I0127 12:36:04.120102  534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem (1082 bytes)
	I0127 12:36:04.120256  534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem, removing ...
	I0127 12:36:04.120274  534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem
	I0127 12:36:04.120316  534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem (1123 bytes)
	I0127 12:36:04.120402  534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem, removing ...
	I0127 12:36:04.120415  534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem
	I0127 12:36:04.120457  534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem (1679 bytes)
	I0127 12:36:04.120535  534894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem org=jenkins.newest-cni-610630 san=[127.0.0.1 192.168.39.228 localhost minikube newest-cni-610630]
	I0127 12:36:04.308578  534894 provision.go:177] copyRemoteCerts
	I0127 12:36:04.308646  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:36:04.308681  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.311740  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.312147  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.312181  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.312367  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.312539  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.312718  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.312951  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.406421  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:36:04.434493  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:36:04.458820  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 12:36:04.483270  534894 provision.go:87] duration metric: took 369.872198ms to configureAuth
	I0127 12:36:04.483307  534894 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:36:04.483583  534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:04.483608  534894 machine.go:96] duration metric: took 745.518388ms to provisionDockerMachine
	I0127 12:36:04.483622  534894 start.go:293] postStartSetup for "newest-cni-610630" (driver="kvm2")
	I0127 12:36:04.483638  534894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:36:04.483676  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.484046  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:36:04.484091  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.487237  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.487689  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.487724  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.487930  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.488140  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.488365  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.488527  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.578283  534894 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:36:04.583274  534894 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:36:04.583302  534894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/addons for local assets ...
	I0127 12:36:04.583381  534894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/files for local assets ...
	I0127 12:36:04.583480  534894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem -> 4783872.pem in /etc/ssl/certs
	I0127 12:36:04.583597  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:36:04.594213  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:36:04.618506  534894 start.go:296] duration metric: took 134.861455ms for postStartSetup
	I0127 12:36:04.618569  534894 fix.go:56] duration metric: took 21.442212309s for fixHost
	I0127 12:36:04.618601  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.621910  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.622352  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.622388  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.622670  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.622872  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.623064  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.623231  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.623434  534894 main.go:141] libmachine: Using SSH client type: native
	I0127 12:36:04.623683  534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 12:36:04.623701  534894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:36:04.745637  534894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981364.720376969
	
	I0127 12:36:04.745668  534894 fix.go:216] guest clock: 1737981364.720376969
	I0127 12:36:04.745677  534894 fix.go:229] Guest: 2025-01-27 12:36:04.720376969 +0000 UTC Remote: 2025-01-27 12:36:04.618576525 +0000 UTC m=+21.609424923 (delta=101.800444ms)
	I0127 12:36:04.745704  534894 fix.go:200] guest clock delta is within tolerance: 101.800444ms
	I0127 12:36:04.745711  534894 start.go:83] releasing machines lock for "newest-cni-610630", held for 21.569374077s
	I0127 12:36:04.745742  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.746064  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:04.749116  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.749586  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.749623  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.749762  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.750369  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.750591  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:04.750714  534894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:36:04.750788  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.750841  534894 ssh_runner.go:195] Run: cat /version.json
	I0127 12:36:04.750872  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:04.753604  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.753937  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.753995  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.754036  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.754117  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.754283  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.754435  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:04.754463  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:04.754505  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.754649  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:04.754824  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:04.754704  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.754972  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:04.755165  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:04.837766  534894 ssh_runner.go:195] Run: systemctl --version
	I0127 12:36:04.870922  534894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:36:04.877067  534894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:36:04.877148  534894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:36:04.898288  534894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:36:04.898318  534894 start.go:495] detecting cgroup driver to use...
	I0127 12:36:04.898407  534894 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:36:04.932879  534894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:36:04.949987  534894 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:36:04.950133  534894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:36:04.967044  534894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:36:04.983091  534894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:36:05.124492  534894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:36:05.268901  534894 docker.go:233] disabling docker service ...
	I0127 12:36:05.268987  534894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:36:05.284320  534894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:36:05.298992  534894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:36:05.441228  534894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:36:05.609452  534894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:36:05.626916  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:36:05.647205  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:36:05.657704  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:36:05.667476  534894 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:36:05.667555  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:36:05.677468  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:36:05.688601  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:36:05.698702  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:36:05.710663  534894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:36:05.724221  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:36:05.737093  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:36:05.746742  534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:36:05.756481  534894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:36:05.767282  534894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:36:05.767344  534894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:36:05.780026  534894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:36:05.791098  534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:05.930676  534894 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:36:05.966221  534894 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 12:36:05.966321  534894 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:36:05.971094  534894 retry.go:31] will retry after 1.421722911s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0127 12:36:07.393037  534894 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:36:07.398456  534894 start.go:563] Will wait 60s for crictl version
	I0127 12:36:07.398530  534894 ssh_runner.go:195] Run: which crictl
	I0127 12:36:07.402351  534894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:36:07.446224  534894 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0127 12:36:07.446301  534894 ssh_runner.go:195] Run: containerd --version
	I0127 12:36:07.473080  534894 ssh_runner.go:195] Run: containerd --version
	I0127 12:36:07.497663  534894 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0127 12:36:07.498857  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
	I0127 12:36:07.501622  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:07.502032  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:07.502071  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:07.502274  534894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 12:36:07.506028  534894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:36:07.519964  534894 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 12:36:03.206663  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:05.207472  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:07.706605  532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:07.521255  534894 kubeadm.go:883] updating cluster {Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Networ
k: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:36:07.521413  534894 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:36:07.521493  534894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:36:07.554098  534894 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:36:07.554125  534894 containerd.go:534] Images already preloaded, skipping extraction
	I0127 12:36:07.554187  534894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:36:07.591861  534894 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:36:07.591890  534894 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:36:07.591901  534894 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.32.1 containerd true true} ...
	I0127 12:36:07.592033  534894 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-610630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:36:07.592107  534894 ssh_runner.go:195] Run: sudo crictl info
	I0127 12:36:07.633013  534894 cni.go:84] Creating CNI manager for ""
	I0127 12:36:07.633040  534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:07.633051  534894 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 12:36:07.633082  534894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-610630 NodeName:newest-cni-610630 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:36:07.633263  534894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-610630"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.228"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:36:07.633336  534894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:36:07.643906  534894 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:36:07.643972  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:36:07.653399  534894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0127 12:36:07.671016  534894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:36:07.691229  534894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2311 bytes)
	I0127 12:36:07.711891  534894 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I0127 12:36:07.716614  534894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:36:07.730520  534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:07.852685  534894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:07.870469  534894 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630 for IP: 192.168.39.228
	I0127 12:36:07.870498  534894 certs.go:194] generating shared ca certs ...
	I0127 12:36:07.870523  534894 certs.go:226] acquiring lock for ca certs: {Name:mk02d117412837bd489768267e2b174e6c3ff6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:07.870697  534894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key
	I0127 12:36:07.870773  534894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key
	I0127 12:36:07.870785  534894 certs.go:256] generating profile certs ...
	I0127 12:36:07.870943  534894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/client.key
	I0127 12:36:07.871073  534894 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.key.2ce4e80e
	I0127 12:36:07.871140  534894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.key
	I0127 12:36:07.871291  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem (1338 bytes)
	W0127 12:36:07.871334  534894 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387_empty.pem, impossibly tiny 0 bytes
	I0127 12:36:07.871349  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:36:07.871394  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:36:07.871429  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:36:07.871461  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem (1679 bytes)
	I0127 12:36:07.871519  534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem (1708 bytes)
	I0127 12:36:07.872415  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:36:07.904294  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:36:07.944289  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:36:07.979498  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:36:08.010836  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 12:36:08.041389  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:36:08.201622  532844 pod_ready.go:82] duration metric: took 4m0.001032286s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" ...
	E0127 12:36:08.201658  532844 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 12:36:08.201683  532844 pod_ready.go:39] duration metric: took 4m14.040174083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:08.201724  532844 kubeadm.go:597] duration metric: took 4m21.555444284s to restartPrimaryControlPlane
	W0127 12:36:08.201798  532844 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 12:36:08.201833  532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 12:36:10.133466  532844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.93160232s)
	I0127 12:36:10.133550  532844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:36:10.155296  532844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:36:10.170023  532844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:36:10.183165  532844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:36:10.183194  532844 kubeadm.go:157] found existing configuration files:
	
	I0127 12:36:10.183257  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 12:36:10.195175  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:36:10.195253  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:36:10.208349  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 12:36:10.220351  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:36:10.220429  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:36:10.238914  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 12:36:10.254995  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:36:10.255067  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:36:10.266753  532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 12:36:10.278422  532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:36:10.278490  532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
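The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint (https://control-plane.minikube.internal:8444 for this profile) and removed when the endpoint is absent or the file does not exist, so the subsequent kubeadm init can regenerate it. A minimal shell sketch of the same check-and-remove pattern (the endpoint and file names are taken from the log; the loop itself is illustrative, not minikube's code):

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the file only if it already points at the expected endpoint
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done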
	I0127 12:36:10.292279  532844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:36:10.351007  532844 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:36:10.351189  532844 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:36:10.469769  532844 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:36:10.469949  532844 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:36:10.470056  532844 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:36:10.479353  532844 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:36:10.481858  532844 out.go:235]   - Generating certificates and keys ...
	I0127 12:36:10.481959  532844 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:36:10.482038  532844 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:36:10.482135  532844 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:36:10.482236  532844 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 12:36:10.482358  532844 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:36:10.482442  532844 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 12:36:10.482525  532844 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 12:36:10.482633  532844 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:36:10.483039  532844 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:36:10.483619  532844 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:36:10.483746  532844 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 12:36:10.483829  532844 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:36:10.585561  532844 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:36:10.784195  532844 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:36:10.958020  532844 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:36:11.223196  532844 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:36:11.439416  532844 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:36:11.440271  532844 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:36:11.444236  532844 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:36:08.374973  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:10.872073  532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:11.445766  532844 out.go:235]   - Booting up control plane ...
	I0127 12:36:11.445895  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:36:11.445993  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:36:11.447764  532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:36:11.484418  532844 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:36:11.496508  532844 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:36:11.496594  532844 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:36:11.681886  532844 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:36:11.682039  532844 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:36:12.183183  532844 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.076889ms
	I0127 12:36:12.183305  532844 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:36:08.074441  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:36:08.107699  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:36:08.137950  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem --> /usr/share/ca-certificates/478387.pem (1338 bytes)
	I0127 12:36:08.163896  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /usr/share/ca-certificates/4783872.pem (1708 bytes)
	I0127 12:36:08.188493  534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:36:08.217196  534894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:36:08.237633  534894 ssh_runner.go:195] Run: openssl version
	I0127 12:36:08.244270  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478387.pem && ln -fs /usr/share/ca-certificates/478387.pem /etc/ssl/certs/478387.pem"
	I0127 12:36:08.258544  534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478387.pem
	I0127 12:36:08.264117  534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:28 /usr/share/ca-certificates/478387.pem
	I0127 12:36:08.264194  534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478387.pem
	I0127 12:36:08.271823  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478387.pem /etc/ssl/certs/51391683.0"
	I0127 12:36:08.283160  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783872.pem && ln -fs /usr/share/ca-certificates/4783872.pem /etc/ssl/certs/4783872.pem"
	I0127 12:36:08.293600  534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783872.pem
	I0127 12:36:08.299046  534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:28 /usr/share/ca-certificates/4783872.pem
	I0127 12:36:08.299115  534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783872.pem
	I0127 12:36:08.306015  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783872.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:36:08.317692  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:36:08.328317  534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:36:08.332856  534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:36:08.332912  534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:36:08.342875  534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
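The test/ln runs above install each CA into the OpenSSL trust store by subject hash: the certificate is copied to /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under the hash name OpenSSL uses for lookup (51391683.0, 3ec20f2e.0 and b5213941.0 here). A sketch of the same step for a single certificate (the path and resulting hash match the minikube CA in the log; the variable names are illustrative):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")      # b5213941 for this CA
    # hash-named symlink that OpenSSL consults when verifying chains
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"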
	I0127 12:36:08.355240  534894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:36:08.363234  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:36:08.369655  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:36:08.377149  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:36:08.382739  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:36:08.388277  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:36:08.395644  534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
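Each openssl x509 -noout -checkend 86400 run above asserts that the certificate remains valid for at least the next 24 hours (86400 seconds); openssl exits non-zero if it would expire inside that window, which is what lets minikube reuse the existing control-plane certificates. An equivalent loop over the same files (paths as in the log; the loop is illustrative):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
        sudo openssl x509 -noout -checkend 86400 \
            -in "/var/lib/minikube/certs/$c.crt" || echo "$c expires within 24h"
    done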
	I0127 12:36:08.403226  534894 kubeadm.go:392] StartCluster: {Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:36:08.403325  534894 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 12:36:08.403369  534894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:36:08.454071  534894 cri.go:89] found id: "05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2"
	I0127 12:36:08.454100  534894 cri.go:89] found id: "a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250"
	I0127 12:36:08.454104  534894 cri.go:89] found id: "357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271"
	I0127 12:36:08.454108  534894 cri.go:89] found id: "8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b"
	I0127 12:36:08.454118  534894 cri.go:89] found id: "631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a"
	I0127 12:36:08.454123  534894 cri.go:89] found id: "3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13"
	I0127 12:36:08.454127  534894 cri.go:89] found id: "ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76"
	I0127 12:36:08.454130  534894 cri.go:89] found id: "0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c"
	I0127 12:36:08.454134  534894 cri.go:89] found id: ""
	I0127 12:36:08.454198  534894 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 12:36:08.472428  534894 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T12:36:08Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 12:36:08.472525  534894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:36:08.484156  534894 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:36:08.484183  534894 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:36:08.484255  534894 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:36:08.494975  534894 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:36:08.496360  534894 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-610630" does not appear in /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:08.497417  534894 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-471120/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-610630" cluster setting kubeconfig missing "newest-cni-610630" context setting]
	I0127 12:36:08.498843  534894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:08.501415  534894 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:36:08.513111  534894 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.228
	I0127 12:36:08.513147  534894 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:36:08.513163  534894 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0127 12:36:08.513216  534894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:36:08.561176  534894 cri.go:89] found id: "05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2"
	I0127 12:36:08.561203  534894 cri.go:89] found id: "a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250"
	I0127 12:36:08.561209  534894 cri.go:89] found id: "357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271"
	I0127 12:36:08.561214  534894 cri.go:89] found id: "8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b"
	I0127 12:36:08.561218  534894 cri.go:89] found id: "631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a"
	I0127 12:36:08.561223  534894 cri.go:89] found id: "3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13"
	I0127 12:36:08.561227  534894 cri.go:89] found id: "ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76"
	I0127 12:36:08.561231  534894 cri.go:89] found id: "0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c"
	I0127 12:36:08.561235  534894 cri.go:89] found id: ""
	I0127 12:36:08.561242  534894 cri.go:252] Stopping containers: [05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2 a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250 357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271 8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b 631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a 3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13 ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76 0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c]
	I0127 12:36:08.561301  534894 ssh_runner.go:195] Run: which crictl
	I0127 12:36:08.565588  534894 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2 a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250 357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271 8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b 631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a 3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13 ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76 0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c
	I0127 12:36:08.619372  534894 ssh_runner.go:195] Run: sudo systemctl stop kubelet
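Before reconfiguring the control plane, minikube lists every kube-system container through crictl, stops them with a 10-second grace period, and then stops the kubelet (the two Run lines above). The same pattern as a one-liner (the label filter and timeout are taken from the log; piping the IDs through xargs is illustrative):

    # stop all kube-system containers, then the kubelet itself
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
        | xargs -r sudo crictl stop --timeout=10
    sudo systemctl stop kubelet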
	I0127 12:36:08.636553  534894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:36:08.648359  534894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:36:08.648385  534894 kubeadm.go:157] found existing configuration files:
	
	I0127 12:36:08.648439  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:36:08.659186  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:36:08.659257  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:36:08.668828  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:36:08.679551  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:36:08.679624  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:36:08.689530  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:36:08.701111  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:36:08.701164  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:36:08.709830  534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:36:08.718407  534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:36:08.718495  534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:36:08.727400  534894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:36:08.736296  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:08.887779  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:09.818917  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:10.080535  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:10.159744  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:10.232154  534894 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:10.232252  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:10.732454  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:11.233357  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:11.264081  534894 api_server.go:72] duration metric: took 1.031921463s to wait for apiserver process to appear ...
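The repeated pgrep calls above poll roughly every 500ms until a kube-apiserver process exists; only once it appears (about 1s here) does minikube begin probing the HTTP health endpoint. A minimal polling loop in the same spirit (the pgrep pattern is the one from the log, quoted for the shell; the loop is illustrative):

    # wait for the kube-apiserver process to appear
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5
    done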
	I0127 12:36:11.264115  534894 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:11.264142  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:11.264724  534894 api_server.go:269] stopped: https://192.168.39.228:8443/healthz: Get "https://192.168.39.228:8443/healthz": dial tcp 192.168.39.228:8443: connect: connection refused
	I0127 12:36:11.764442  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:14.358365  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:36:14.358472  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:36:14.358502  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:14.408913  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:36:14.409034  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:36:14.764463  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:14.771512  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:36:14.771584  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:36:15.264813  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:15.270318  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:36:15.270344  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:36:15.765063  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:15.772704  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:36:15.772774  534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:36:16.264285  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:16.271130  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0127 12:36:16.281041  534894 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:16.281071  534894 api_server.go:131] duration metric: took 5.016947638s to wait for apiserver health ...
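The healthz exchange above is a normal apiserver start-up sequence: the anonymous probe is first rejected with 403 (RBAC bootstrap roles are not installed yet), then /healthz returns 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200 "ok", at which point minikube records ~5s to wait for apiserver health. An illustrative way to wait for the same 200 with curl (the URL is the one from the log; -k skips TLS verification because this throwaway probe does not load the cluster CA, and -f makes curl fail on the intermediate 403/500 responses):

    # poll /healthz until the apiserver answers 200
    until curl -kfsS https://192.168.39.228:8443/healthz >/dev/null; do
        sleep 0.5
    done
    echo "apiserver healthy"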
	I0127 12:36:16.281087  534894 cni.go:84] Creating CNI manager for ""
	I0127 12:36:16.281096  534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:16.282806  534894 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:36:16.284232  534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:36:16.297533  534894 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:36:16.314501  534894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:16.324319  534894 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:16.324349  534894 system_pods.go:61] "coredns-668d6bf9bc-n6hwn" [24d3582e-97d0-4bb8-b12a-6f69ecd72309] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:16.324357  534894 system_pods.go:61] "coredns-668d6bf9bc-vg4bb" [3d20d4a5-8ddf-4166-af63-47beab76d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:16.324365  534894 system_pods.go:61] "etcd-newest-cni-610630" [0a812f8b-1e38-49a2-be17-9db3fb1979db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:36:16.324379  534894 system_pods.go:61] "kube-apiserver-newest-cni-610630" [292fb9b9-ccfb-49b5-892d-078e5897981d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:36:16.324385  534894 system_pods.go:61] "kube-controller-manager-newest-cni-610630" [36d1c2e6-42ce-4d4a-8bba-ff7beb6551ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:36:16.324391  534894 system_pods.go:61] "kube-proxy-8szpt" [11487c0b-6afa-4c83-9eb8-6a9f609f7b58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:36:16.324395  534894 system_pods.go:61] "kube-scheduler-newest-cni-610630" [2f8744a9-54dc-4b5c-92f6-b0d3b9b0de7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:36:16.324400  534894 system_pods.go:61] "metrics-server-f79f97bbb-kcc5g" [6593df15-330d-4389-b878-45d396d718b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:16.324408  534894 system_pods.go:61] "storage-provisioner" [3cc7604e-fcbb-48e5-8445-82d5150b759f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:36:16.324413  534894 system_pods.go:74] duration metric: took 9.892595ms to wait for pod list to return data ...
	I0127 12:36:16.324424  534894 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:36:16.327339  534894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:36:16.327364  534894 node_conditions.go:123] node cpu capacity is 2
	I0127 12:36:16.327385  534894 node_conditions.go:105] duration metric: took 2.956884ms to run NodePressure ...
	I0127 12:36:16.327404  534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:36:16.991253  534894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:36:17.011999  534894 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:17.012027  534894 kubeadm.go:597] duration metric: took 8.527837095s to restartPrimaryControlPlane
	I0127 12:36:17.012040  534894 kubeadm.go:394] duration metric: took 8.608822701s to StartCluster
	I0127 12:36:17.012072  534894 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:17.012204  534894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:17.014682  534894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:17.015030  534894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:17.015158  534894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:17.015477  534894 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-610630"
	I0127 12:36:17.015505  534894 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-610630"
	I0127 12:36:17.015320  534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:17.015542  534894 addons.go:69] Setting metrics-server=true in profile "newest-cni-610630"
	I0127 12:36:17.015555  534894 addons.go:238] Setting addon metrics-server=true in "newest-cni-610630"
	W0127 12:36:17.015562  534894 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:17.015556  534894 addons.go:69] Setting default-storageclass=true in profile "newest-cni-610630"
	I0127 12:36:17.015582  534894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-610630"
	I0127 12:36:17.015588  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.015521  534894 addons.go:69] Setting dashboard=true in profile "newest-cni-610630"
	I0127 12:36:17.015608  534894 addons.go:238] Setting addon dashboard=true in "newest-cni-610630"
	W0127 12:36:17.015617  534894 addons.go:247] addon dashboard should already be in state true
	I0127 12:36:17.015643  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.016040  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.016039  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.016050  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	W0127 12:36:17.015533  534894 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:17.016079  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.016082  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.016083  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.016420  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.016423  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.016450  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.031224  534894 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:17.032914  534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:17.036836  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0127 12:36:17.037340  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.037862  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.037882  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.038318  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.038866  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.038905  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.039846  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0127 12:36:17.040182  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.040873  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.040890  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.041292  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.041587  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.045301  534894 addons.go:238] Setting addon default-storageclass=true in "newest-cni-610630"
	W0127 12:36:17.045320  534894 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:17.045352  534894 host.go:66] Checking if "newest-cni-610630" exists ...
	I0127 12:36:17.045759  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.045799  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.048089  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0127 12:36:17.048729  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.049195  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.049213  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.049644  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.050180  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.050222  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.050700  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0127 12:36:17.051087  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.051560  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.051581  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.051971  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.052563  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.052600  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.065040  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0127 12:36:17.065537  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.066047  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.066072  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.066400  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.066556  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.068438  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.070276  534894 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:17.071684  534894 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:17.072821  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:17.072844  534894 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:17.072867  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.073985  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0127 12:36:17.074526  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.075082  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.075099  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.075677  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.076310  534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:17.076356  534894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:17.078889  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.079441  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.079463  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.079747  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.079954  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.080136  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.080333  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.091530  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I0127 12:36:17.092126  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.092669  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.092694  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.093285  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.093437  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.095189  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I0127 12:36:17.095304  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.095761  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.096341  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.096358  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.096828  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.097030  534894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:17.097195  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.097833  534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40641
	I0127 12:36:17.098239  534894 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:17.098254  534894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:17.098271  534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:17.098299  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.098871  534894 main.go:141] libmachine: Using API Version  1
	I0127 12:36:17.098889  534894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:17.099255  534894 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:17.099465  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.099541  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
	I0127 12:36:17.100856  534894 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:12.874242  532607 pod_ready.go:93] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.874282  532607 pod_ready.go:82] duration metric: took 9.010574512s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.874303  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.882689  532607 pod_ready.go:93] pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.882775  532607 pod_ready.go:82] duration metric: took 8.462495ms for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.882801  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.888659  532607 pod_ready.go:93] pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.888693  532607 pod_ready.go:82] duration metric: took 5.874272ms for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.888707  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.894080  532607 pod_ready.go:93] pod "kube-proxy-smp6l" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.894141  532607 pod_ready.go:82] duration metric: took 5.425838ms for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.894163  532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.900793  532607 pod_ready.go:93] pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:12.900849  532607 pod_ready.go:82] duration metric: took 6.668808ms for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:12.900869  532607 pod_ready.go:39] duration metric: took 9.044300135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
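The pod_ready waits above give each labelled system-critical pod up to 6m0s to report the Ready condition before the cluster is treated as healthy (about 9s in this run). Outside minikube's own Go helpers, roughly the same check can be expressed with kubectl wait (illustrative; the labels and timeout mirror the log):

    # wait for DNS/proxy and the static control-plane pods to become Ready
    kubectl -n kube-system wait pod -l 'k8s-app in (kube-dns, kube-proxy)' \
        --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod \
        -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' \
        --for=condition=Ready --timeout=6m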
	I0127 12:36:12.900904  532607 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:12.900995  532607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:12.922995  532607 api_server.go:72] duration metric: took 9.343524429s to wait for apiserver process to appear ...
	I0127 12:36:12.923066  532607 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:12.923097  532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
	I0127 12:36:12.930234  532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 200:
	ok
	I0127 12:36:12.931482  532607 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:12.931504  532607 api_server.go:131] duration metric: took 8.421115ms to wait for apiserver health ...
	I0127 12:36:12.931513  532607 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:13.073659  532607 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:13.073701  532607 system_pods.go:61] "coredns-668d6bf9bc-46nfk" [ca146154-7693-43e5-ae2a-f0c3148327b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:13.073712  532607 system_pods.go:61] "coredns-668d6bf9bc-9p64b" [4d44d79e-ea3d-4085-9fb2-356746e71e9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:13.073722  532607 system_pods.go:61] "etcd-embed-certs-346100" [cb00782a-b078-43ee-aa3f-4806aa7629d6] Running
	I0127 12:36:13.073729  532607 system_pods.go:61] "kube-apiserver-embed-certs-346100" [7b0a8d77-4737-4bde-8e2a-2462c524f9a2] Running
	I0127 12:36:13.073735  532607 system_pods.go:61] "kube-controller-manager-embed-certs-346100" [196254b2-812b-43a4-ae10-d55a11957faf] Running
	I0127 12:36:13.073741  532607 system_pods.go:61] "kube-proxy-smp6l" [886c9cd4-795b-4e33-a16e-e12302c37665] Running
	I0127 12:36:13.073746  532607 system_pods.go:61] "kube-scheduler-embed-certs-346100" [90cbc1fe-52a3-45d8-a8e9-edc60f5c4829] Running
	I0127 12:36:13.073754  532607 system_pods.go:61] "metrics-server-f79f97bbb-w8fsn" [3a78ab43-37b0-4dc0-89a9-59a558ef997c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:13.073811  532607 system_pods.go:61] "storage-provisioner" [0d021617-8412-4f33-ba4f-2b3b458721ff] Running
	I0127 12:36:13.073828  532607 system_pods.go:74] duration metric: took 142.306493ms to wait for pod list to return data ...
	I0127 12:36:13.073848  532607 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:13.273298  532607 default_sa.go:45] found service account: "default"
	I0127 12:36:13.273415  532607 default_sa.go:55] duration metric: took 199.555226ms for default service account to be created ...
	I0127 12:36:13.273446  532607 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:36:13.477525  532607 system_pods.go:87] 9 kube-system pods found
	I0127 12:36:17.101529  534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
	I0127 12:36:17.101719  534894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:17.101731  534894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:17.101745  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.102276  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:17.102295  534894 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:17.102329  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
	I0127 12:36:17.102718  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.103291  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.103308  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.103462  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.103607  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.103729  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.103834  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.106885  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.107336  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.107361  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.107579  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.107585  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.107768  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.107957  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.108065  534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
	I0127 12:36:17.108184  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.108305  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
	I0127 12:36:17.108457  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
	I0127 12:36:17.108478  534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
	I0127 12:36:17.108587  534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
	I0127 12:36:17.108674  534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
	I0127 12:36:17.319272  534894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:17.355389  534894 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:17.355483  534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:17.383883  534894 api_server.go:72] duration metric: took 368.528555ms to wait for apiserver process to appear ...
	I0127 12:36:17.383915  534894 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:17.383940  534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 12:36:17.392047  534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0127 12:36:17.393460  534894 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:17.393491  534894 api_server.go:131] duration metric: took 9.56764ms to wait for apiserver health ...
	I0127 12:36:17.393503  534894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:17.419483  534894 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:17.419523  534894 system_pods.go:61] "coredns-668d6bf9bc-n6hwn" [24d3582e-97d0-4bb8-b12a-6f69ecd72309] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:17.419533  534894 system_pods.go:61] "coredns-668d6bf9bc-vg4bb" [3d20d4a5-8ddf-4166-af63-47beab76d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:36:17.419543  534894 system_pods.go:61] "etcd-newest-cni-610630" [0a812f8b-1e38-49a2-be17-9db3fb1979db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:36:17.419550  534894 system_pods.go:61] "kube-apiserver-newest-cni-610630" [292fb9b9-ccfb-49b5-892d-078e5897981d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:36:17.419559  534894 system_pods.go:61] "kube-controller-manager-newest-cni-610630" [36d1c2e6-42ce-4d4a-8bba-ff7beb6551ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:36:17.419565  534894 system_pods.go:61] "kube-proxy-8szpt" [11487c0b-6afa-4c83-9eb8-6a9f609f7b58] Running
	I0127 12:36:17.419574  534894 system_pods.go:61] "kube-scheduler-newest-cni-610630" [2f8744a9-54dc-4b5c-92f6-b0d3b9b0de7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:36:17.419582  534894 system_pods.go:61] "metrics-server-f79f97bbb-kcc5g" [6593df15-330d-4389-b878-45d396d718b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:17.419591  534894 system_pods.go:61] "storage-provisioner" [3cc7604e-fcbb-48e5-8445-82d5150b759f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:36:17.419601  534894 system_pods.go:74] duration metric: took 26.090469ms to wait for pod list to return data ...
	I0127 12:36:17.419614  534894 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:17.422917  534894 default_sa.go:45] found service account: "default"
	I0127 12:36:17.422941  534894 default_sa.go:55] duration metric: took 3.317044ms for default service account to be created ...
	I0127 12:36:17.422956  534894 kubeadm.go:582] duration metric: took 407.606907ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 12:36:17.422975  534894 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:36:17.429059  534894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:36:17.429091  534894 node_conditions.go:123] node cpu capacity is 2
	I0127 12:36:17.429116  534894 node_conditions.go:105] duration metric: took 6.133766ms to run NodePressure ...
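The NodePressure verification above reads the node's reported capacity (ephemeral storage and CPU) and its pressure conditions from the Kubernetes API. The sketch below shows one way to fetch those values with client-go; the kubeconfig path is a placeholder, and this is only an illustration, not minikube's node_conditions.go implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute your own.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "newest-cni-610630", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Capacity holds resource quantities such as "17734596Ki" for
	// ephemeral-storage and "2" for cpu, matching the values in the log.
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())

	// Pressure checks look at node conditions such as MemoryPressure and
	// DiskPressure, which should report False on a healthy node.
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure {
			fmt.Printf("%s=%s\n", cond.Type, cond.Status)
		}
	}
}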
	I0127 12:36:17.429138  534894 start.go:241] waiting for startup goroutines ...
	I0127 12:36:17.493751  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:17.493777  534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:17.496271  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:17.540289  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:17.540321  534894 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:17.595530  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:17.595565  534894 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:17.609027  534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:17.609055  534894 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:17.726024  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:17.764459  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:17.764492  534894 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:17.764569  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:17.852391  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:17.852429  534894 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:17.964392  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:17.964417  534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:18.185418  532844 kubeadm.go:310] [api-check] The API server is healthy after 6.002059282s
	I0127 12:36:18.204454  532844 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:36:18.218201  532844 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:36:18.245054  532844 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:36:18.245331  532844 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-887672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:36:18.257186  532844 kubeadm.go:310] [bootstrap-token] Using token: 5yhtlj.kyb5uzy41lrz34us
	I0127 12:36:18.258581  532844 out.go:235]   - Configuring RBAC rules ...
	I0127 12:36:18.258747  532844 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:36:18.265191  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:36:18.272296  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:36:18.285037  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:36:18.285204  532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:36:18.285313  532844 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:36:18.593364  532844 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:36:19.042942  532844 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:36:19.593432  532844 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:36:19.594797  532844 kubeadm.go:310] 
	I0127 12:36:19.594875  532844 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:36:19.594888  532844 kubeadm.go:310] 
	I0127 12:36:19.594970  532844 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:36:19.594981  532844 kubeadm.go:310] 
	I0127 12:36:19.595011  532844 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:36:19.595081  532844 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:36:19.595152  532844 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:36:19.595166  532844 kubeadm.go:310] 
	I0127 12:36:19.595239  532844 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:36:19.595246  532844 kubeadm.go:310] 
	I0127 12:36:19.595301  532844 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:36:19.595308  532844 kubeadm.go:310] 
	I0127 12:36:19.595371  532844 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:36:19.595464  532844 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:36:19.595545  532844 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:36:19.595554  532844 kubeadm.go:310] 
	I0127 12:36:19.595667  532844 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:36:19.595757  532844 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:36:19.595767  532844 kubeadm.go:310] 
	I0127 12:36:19.595869  532844 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
	I0127 12:36:19.595998  532844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
	I0127 12:36:19.596017  532844 kubeadm.go:310] 	--control-plane 
	I0127 12:36:19.596021  532844 kubeadm.go:310] 
	I0127 12:36:19.596121  532844 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:36:19.596137  532844 kubeadm.go:310] 
	I0127 12:36:19.596223  532844 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
	I0127 12:36:19.596305  532844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 
	I0127 12:36:19.598645  532844 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:36:19.598687  532844 cni.go:84] Creating CNI manager for ""
	I0127 12:36:19.598696  532844 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 12:36:19.600188  532844 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:36:18.113709  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:18.113742  534894 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:18.153599  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:18.153635  534894 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:18.176500  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:18.176539  534894 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:18.216973  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:18.217007  534894 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:18.274511  534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:18.274583  534894 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:18.342333  534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:18.361302  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:18.361342  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:18.361665  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:18.361699  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:18.361710  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:18.361719  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:18.362117  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:18.362140  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:18.362144  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:18.371041  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:18.371065  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:18.371339  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:18.371377  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.594328  534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868263184s)
	I0127 12:36:19.594692  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.594482  534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.829887156s)
	I0127 12:36:19.594790  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.594804  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.595140  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:19.595208  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.595219  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.595238  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.595247  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.595556  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:19.595579  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.595600  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.595618  534894 addons.go:479] Verifying addon metrics-server=true in "newest-cni-610630"
	I0127 12:36:19.596388  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.596722  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.596754  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:19.596763  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:19.596770  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:19.597063  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:19.597086  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:19.597098  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:20.095246  534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.752863121s)
	I0127 12:36:20.095306  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:20.095324  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:20.095623  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:20.095685  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:20.095695  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:20.095711  534894 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:20.095721  534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
	I0127 12:36:20.096021  534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
	I0127 12:36:20.096038  534894 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:20.096055  534894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:20.097482  534894 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-610630 addons enable metrics-server
	
	I0127 12:36:20.098730  534894 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0127 12:36:20.099860  534894 addons.go:514] duration metric: took 3.084737287s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0127 12:36:20.099913  534894 start.go:246] waiting for cluster config update ...
	I0127 12:36:20.099934  534894 start.go:255] writing updated cluster config ...
	I0127 12:36:20.100260  534894 ssh_runner.go:195] Run: rm -f paused
	I0127 12:36:20.153018  534894 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:36:20.154413  534894 out.go:177] * Done! kubectl is now configured to use "newest-cni-610630" cluster and "default" namespace by default
	I0127 12:36:19.601391  532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:36:19.615483  532844 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:36:19.641045  532844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:36:19.641123  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:19.641161  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-887672 minikube.k8s.io/updated_at=2025_01_27T12_36_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=default-k8s-diff-port-887672 minikube.k8s.io/primary=true
	I0127 12:36:19.655315  532844 ops.go:34] apiserver oom_adj: -16
	I0127 12:36:19.893685  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:20.394472  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:20.893933  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:21.394823  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:21.893992  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:22.393950  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:22.894084  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:23.394506  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:23.893909  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:24.393790  532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:36:24.491305  532844 kubeadm.go:1113] duration metric: took 4.850249048s to wait for elevateKubeSystemPrivileges
	I0127 12:36:24.491356  532844 kubeadm.go:394] duration metric: took 4m37.901720321s to StartCluster
	I0127 12:36:24.491385  532844 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:24.491488  532844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:36:24.493752  532844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:36:24.494040  532844 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.130 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:36:24.494175  532844 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:36:24.494273  532844 config.go:182] Loaded profile config "default-k8s-diff-port-887672": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:36:24.494285  532844 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494323  532844 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-887672"
	I0127 12:36:24.494316  532844 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494338  532844 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494372  532844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-887672"
	I0127 12:36:24.494381  532844 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.494394  532844 addons.go:247] addon dashboard should already be in state true
	W0127 12:36:24.494332  532844 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:36:24.494432  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494463  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494323  532844 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-887672"
	I0127 12:36:24.494553  532844 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.494564  532844 addons.go:247] addon metrics-server should already be in state true
	I0127 12:36:24.494598  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.494863  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494871  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494871  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.494905  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.494911  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495037  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.495049  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495123  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.495481  532844 out.go:177] * Verifying Kubernetes components...
	I0127 12:36:24.496811  532844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:36:24.513577  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0127 12:36:24.514115  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.514694  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.514720  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.515161  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.515484  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
	I0127 12:36:24.515836  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0127 12:36:24.515999  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0127 12:36:24.516094  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.516144  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.516192  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.516413  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.516675  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.516695  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.516974  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.516994  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.517001  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.517393  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.517583  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.517647  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.518197  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.518252  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.518469  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.518494  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.518868  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.519422  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.519470  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.521629  532844 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-887672"
	W0127 12:36:24.521653  532844 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:36:24.521684  532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
	I0127 12:36:24.522040  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.522081  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.534712  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0127 12:36:24.535195  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.536504  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.536527  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.536554  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0127 12:36:24.536902  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.536959  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.537111  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.537597  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.537616  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.537969  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.538145  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.538989  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0127 12:36:24.539580  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.540009  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0127 12:36:24.540196  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.540422  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.540715  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.540879  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.540902  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.540934  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.540948  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.541341  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.541388  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.541685  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.542042  532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:36:24.542090  532844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:36:24.542251  532844 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:36:24.542373  532844 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:36:24.543206  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.543412  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:36:24.543430  532844 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:36:24.543460  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.544493  532844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:36:24.545545  532844 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:36:24.545643  532844 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:24.545656  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:36:24.545671  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.546541  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:36:24.546563  532844 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:36:24.546584  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.547093  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.547276  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.547478  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.547900  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.548065  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.547944  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.548278  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.549918  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550146  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.550170  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550429  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.550517  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.550608  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.550758  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.550914  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.550956  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.550993  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.551165  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.551308  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.551460  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.551595  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.566621  532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0127 12:36:24.567007  532844 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:36:24.567434  532844 main.go:141] libmachine: Using API Version  1
	I0127 12:36:24.567460  532844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:36:24.567879  532844 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:36:24.568040  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
	I0127 12:36:24.569632  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
	I0127 12:36:24.569844  532844 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:24.569859  532844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:36:24.569875  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
	I0127 12:36:24.572937  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.573361  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
	I0127 12:36:24.573377  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
	I0127 12:36:24.573577  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
	I0127 12:36:24.573757  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
	I0127 12:36:24.573888  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
	I0127 12:36:24.574044  532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
	I0127 12:36:24.747290  532844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:36:24.779846  532844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-887672" to be "Ready" ...
	I0127 12:36:24.813551  532844 node_ready.go:49] node "default-k8s-diff-port-887672" has status "Ready":"True"
	I0127 12:36:24.813582  532844 node_ready.go:38] duration metric: took 33.68566ms for node "default-k8s-diff-port-887672" to be "Ready" ...
	I0127 12:36:24.813594  532844 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:24.825398  532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:24.855841  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:36:24.855869  532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:36:24.865288  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:36:24.890399  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:36:24.907963  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:36:24.907990  532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:36:24.923409  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:36:24.923434  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:36:24.967186  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:36:24.967211  532844 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:36:25.003133  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:36:25.003167  532844 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:36:25.031491  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:36:25.031515  532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:36:25.086171  532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:25.086201  532844 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:36:25.147825  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:36:25.152298  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:36:25.152324  532844 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:36:25.203235  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:36:25.203264  532844 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:36:25.242547  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:36:25.242578  532844 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:36:25.281622  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:36:25.281659  532844 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:36:25.312416  532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:25.312444  532844 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:36:25.365802  532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:36:25.651534  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.651566  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.651590  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.651612  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.651995  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652009  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652020  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652021  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652033  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.652036  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652040  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.652047  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652055  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.652063  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.652511  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652572  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652594  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.652580  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:25.652592  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.652796  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.667377  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.667403  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.667693  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.667709  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.974214  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.974246  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.974553  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.974574  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.974591  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:25.974600  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:25.974992  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:25.975017  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:25.975032  532844 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-887672"
	I0127 12:36:26.960702  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:27.097489  532844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.731632212s)
	I0127 12:36:27.097551  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:27.097567  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:27.097886  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:27.097909  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:27.097909  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
	I0127 12:36:27.097917  532844 main.go:141] libmachine: Making call to close driver server
	I0127 12:36:27.097935  532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
	I0127 12:36:27.098221  532844 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:36:27.098291  532844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:36:27.099837  532844 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-887672 addons enable metrics-server
	
	I0127 12:36:27.101354  532844 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 12:36:27.102395  532844 addons.go:514] duration metric: took 2.608238219s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 12:36:29.331790  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:31.334726  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:33.834237  532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:34.374688  532844 pod_ready.go:93] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.374713  532844 pod_ready.go:82] duration metric: took 9.549290033s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.374725  532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.399299  532844 pod_ready.go:93] pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.399323  532844 pod_ready.go:82] duration metric: took 24.589743ms for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.399332  532844 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.421329  532844 pod_ready.go:93] pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.421359  532844 pod_ready.go:82] duration metric: took 22.019877ms for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.421399  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.427922  532844 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.427946  532844 pod_ready.go:82] duration metric: took 6.537775ms for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.427957  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.447675  532844 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.447701  532844 pod_ready.go:82] duration metric: took 19.736139ms for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.447713  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.729783  532844 pod_ready.go:93] pod "kube-proxy-xl46c" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:34.729827  532844 pod_ready.go:82] duration metric: took 282.092476ms for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:34.729841  532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:35.128755  532844 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:35.128781  532844 pod_ready.go:82] duration metric: took 398.931642ms for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:35.128790  532844 pod_ready.go:39] duration metric: took 10.315186396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:35.128806  532844 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:35.128870  532844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:35.148548  532844 api_server.go:72] duration metric: took 10.654456335s to wait for apiserver process to appear ...
	I0127 12:36:35.148574  532844 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:35.148597  532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
	I0127 12:36:35.156175  532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 200:
	ok
	I0127 12:36:35.157842  532844 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:35.157866  532844 api_server.go:131] duration metric: took 9.283401ms to wait for apiserver health ...
	I0127 12:36:35.157875  532844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:35.339567  532844 system_pods.go:59] 9 kube-system pods found
	I0127 12:36:35.339606  532844 system_pods.go:61] "coredns-668d6bf9bc-jc882" [cc7b1851-f0b2-406d-b972-155b02dcefc6] Running
	I0127 12:36:35.339614  532844 system_pods.go:61] "coredns-668d6bf9bc-s6rln" [553e1b5c-1bb3-48f4-bf25-6837dae6b627] Running
	I0127 12:36:35.339620  532844 system_pods.go:61] "etcd-default-k8s-diff-port-887672" [cfe71b01-c4c5-4772-904f-0f22ebdc9481] Running
	I0127 12:36:35.339625  532844 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-887672" [09952f8b-2235-45c2-aac8-328369a341dd] Running
	I0127 12:36:35.339631  532844 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-887672" [6aee732f-0e4f-4362-b2d5-38e533a146c4] Running
	I0127 12:36:35.339636  532844 system_pods.go:61] "kube-proxy-xl46c" [c2ddd14b-3d9e-4985-935e-5f64d188e68e] Running
	I0127 12:36:35.339641  532844 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-887672" [7a436b79-cc6a-4311-9cb6-24537ed6aed0] Running
	I0127 12:36:35.339652  532844 system_pods.go:61] "metrics-server-f79f97bbb-twqz4" [107a2af6-937d-4c95-a8dd-f47f59dd3afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:36:35.339659  532844 system_pods.go:61] "storage-provisioner" [ebd493f5-ab93-4083-8174-aceb44741e99] Running
	I0127 12:36:35.339675  532844 system_pods.go:74] duration metric: took 181.791009ms to wait for pod list to return data ...
	I0127 12:36:35.339689  532844 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:36:35.528977  532844 default_sa.go:45] found service account: "default"
	I0127 12:36:35.529018  532844 default_sa.go:55] duration metric: took 189.31757ms for default service account to be created ...
	I0127 12:36:35.529033  532844 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:36:35.732388  532844 system_pods.go:87] 9 kube-system pods found
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cd179f17a7d12       523cad1a4df73       50 seconds ago      Exited              dashboard-metrics-scraper   9                   5e7b08c0d5949       dashboard-metrics-scraper-86c6bf9756-4595h
	9d6d9aeb4ff44       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   1db7658f33f3f       kubernetes-dashboard-7779f9b69b-dcvkg
	29c6dfc5d44a1       c69fa2e9cbf5f       22 minutes ago      Running             coredns                     0                   dd5c4df6f6f4c       coredns-668d6bf9bc-s6rln
	f5aeb8c98c5e3       c69fa2e9cbf5f       22 minutes ago      Running             coredns                     0                   861ce0c79dd2c       coredns-668d6bf9bc-jc882
	5284bc00a9c6c       6e38f40d628db       22 minutes ago      Running             storage-provisioner         0                   ffd80b50aa8e5       storage-provisioner
	e83a2d888139c       e29f9c7391fd9       22 minutes ago      Running             kube-proxy                  0                   3d0790356b9f4       kube-proxy-xl46c
	01a7ea5c124b2       019ee182b58e2       22 minutes ago      Running             kube-controller-manager     2                   f9679a692b213       kube-controller-manager-default-k8s-diff-port-887672
	00f284350d3de       2b0d6572d062c       22 minutes ago      Running             kube-scheduler              2                   4c655687b7788       kube-scheduler-default-k8s-diff-port-887672
	f966b9a94f4c0       95c0bda56fc4d       22 minutes ago      Running             kube-apiserver              2                   1e6ad9d2905bd       kube-apiserver-default-k8s-diff-port-887672
	f628b6160f6a6       a9e7e6b294baf       22 minutes ago      Running             etcd                        2                   c2d035c01c724       etcd-default-k8s-diff-port-887672
	
	
	==> containerd <==
	Jan 27 12:52:18 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:18.961040544Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:52:18 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:18.962971375Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:52:18 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:18.963024312Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 12:52:28 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:28.953803019Z" level=info msg="CreateContainer within sandbox \"5e7b08c0d5949f30581496ee21eb96e8912b52a35523d8f9856d08c427d1b274\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 27 12:52:28 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:28.976835713Z" level=info msg="CreateContainer within sandbox \"5e7b08c0d5949f30581496ee21eb96e8912b52a35523d8f9856d08c427d1b274\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794\""
	Jan 27 12:52:28 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:28.977695083Z" level=info msg="StartContainer for \"6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794\""
	Jan 27 12:52:29 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:29.051543269Z" level=info msg="StartContainer for \"6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794\" returns successfully"
	Jan 27 12:52:29 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:29.095499208Z" level=info msg="shim disconnected" id=6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794 namespace=k8s.io
	Jan 27 12:52:29 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:29.095603183Z" level=warning msg="cleaning up after shim disconnected" id=6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794 namespace=k8s.io
	Jan 27 12:52:29 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:29.095639832Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:52:29 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:29.443419962Z" level=info msg="RemoveContainer for \"9fc3ffe96ffca6de2c04ac6a992454bb61e77a16abd9ab20c5b8673d585fa99a\""
	Jan 27 12:52:29 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:52:29.449692402Z" level=info msg="RemoveContainer for \"9fc3ffe96ffca6de2c04ac6a992454bb61e77a16abd9ab20c5b8673d585fa99a\" returns successfully"
	Jan 27 12:57:21 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:21.951658616Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:57:21 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:21.959626668Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 27 12:57:21 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:21.961539056Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 27 12:57:21 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:21.961579827Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 12:57:40 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:40.953855555Z" level=info msg="CreateContainer within sandbox \"5e7b08c0d5949f30581496ee21eb96e8912b52a35523d8f9856d08c427d1b274\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 27 12:57:40 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:40.975836207Z" level=info msg="CreateContainer within sandbox \"5e7b08c0d5949f30581496ee21eb96e8912b52a35523d8f9856d08c427d1b274\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07\""
	Jan 27 12:57:40 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:40.976449163Z" level=info msg="StartContainer for \"cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07\""
	Jan 27 12:57:41 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:41.035554631Z" level=info msg="StartContainer for \"cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07\" returns successfully"
	Jan 27 12:57:41 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:41.077566020Z" level=info msg="shim disconnected" id=cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07 namespace=k8s.io
	Jan 27 12:57:41 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:41.077836004Z" level=warning msg="cleaning up after shim disconnected" id=cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07 namespace=k8s.io
	Jan 27 12:57:41 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:41.077846971Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:57:41 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:41.149189621Z" level=info msg="RemoveContainer for \"6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794\""
	Jan 27 12:57:41 default-k8s-diff-port-887672 containerd[559]: time="2025-01-27T12:57:41.158160020Z" level=info msg="RemoveContainer for \"6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794\" returns successfully"
	
	
	==> coredns [29c6dfc5d44a10bada1040358a5b1e0106c14938facebda56b6f692c2e41482c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f5aeb8c98c5e30c3e2a9f05f54a2b6ee495233c95869ae70b80c9974b434ef90] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-887672
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-887672
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=default-k8s-diff-port-887672
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_36_19_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:36:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-887672
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:58:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:56:53 +0000   Mon, 27 Jan 2025 12:36:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:56:53 +0000   Mon, 27 Jan 2025 12:36:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:56:53 +0000   Mon, 27 Jan 2025 12:36:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:56:53 +0000   Mon, 27 Jan 2025 12:36:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.130
	  Hostname:    default-k8s-diff-port-887672
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9383f6ea38e44b36bc66f0db6519eb9e
	  System UUID:                9383f6ea-38e4-4b36-bc66-f0db6519eb9e
	  Boot ID:                    82f5285e-db64-403f-9241-0cc076002ea0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-jc882                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-668d6bf9bc-s6rln                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-default-k8s-diff-port-887672                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-887672             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-887672    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-xl46c                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-default-k8s-diff-port-887672             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-twqz4                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-4595h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-dcvkg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22m   kube-proxy       
	  Normal  Starting                 22m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node default-k8s-diff-port-887672 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m   kubelet          Node default-k8s-diff-port-887672 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m   kubelet          Node default-k8s-diff-port-887672 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m   node-controller  Node default-k8s-diff-port-887672 event: Registered Node default-k8s-diff-port-887672 in Controller
	
	
	==> dmesg <==
	[  +0.052299] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038385] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.954281] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.240580] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586145] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.476513] systemd-fstab-generator[482]: Ignoring "noauto" option for root device
	[  +0.059700] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053893] systemd-fstab-generator[494]: Ignoring "noauto" option for root device
	[  +0.157407] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +0.135897] systemd-fstab-generator[520]: Ignoring "noauto" option for root device
	[  +0.313189] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +1.141690] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +1.845520] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +5.697647] kauditd_printk_skb: 265 callbacks suppressed
	[Jan27 12:32] kauditd_printk_skb: 91 callbacks suppressed
	[Jan27 12:36] systemd-fstab-generator[3023]: Ignoring "noauto" option for root device
	[  +7.121714] systemd-fstab-generator[3398]: Ignoring "noauto" option for root device
	[  +0.081597] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.833464] systemd-fstab-generator[3526]: Ignoring "noauto" option for root device
	[  +0.014545] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.503281] kauditd_printk_skb: 112 callbacks suppressed
	[  +7.549202] kauditd_printk_skb: 1 callbacks suppressed
	[ +15.366047] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [f628b6160f6a6ac2da7ff17389df02a3ea03f8c6c64a6261d920df4de2399a05] <==
	{"level":"info","ts":"2025-01-27T12:36:13.884939Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:36:13.885044Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:36:13.882649Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:36:13.886010Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:36:13.886126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T12:36:13.886216Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T12:36:13.894257Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:36:13.897240Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T12:36:13.895194Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:36:13.898201Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.130:2379"}
	{"level":"warn","ts":"2025-01-27T12:36:16.825521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.154683ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2534001188604122110 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:node-bootstrapper\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:node-bootstrapper\" value_size:565 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-01-27T12:36:16.826484Z","caller":"traceutil/trace.go:171","msg":"trace[1612838225] transaction","detail":"{read_only:false; response_revision:86; number_of_response:1; }","duration":"253.242512ms","start":"2025-01-27T12:36:16.573221Z","end":"2025-01-27T12:36:16.826464Z","steps":["trace[1612838225] 'process raft request'  (duration: 84.65007ms)","trace[1612838225] 'compare'  (duration: 167.053357ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:36:32.888370Z","caller":"traceutil/trace.go:171","msg":"trace[1861807725] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"149.374098ms","start":"2025-01-27T12:36:32.738963Z","end":"2025-01-27T12:36:32.888337Z","steps":["trace[1861807725] 'process raft request'  (duration: 144.30338ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:36:34.339549Z","caller":"traceutil/trace.go:171","msg":"trace[1827710932] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"172.601869ms","start":"2025-01-27T12:36:34.166928Z","end":"2025-01-27T12:36:34.339530Z","steps":["trace[1827710932] 'process raft request'  (duration: 172.417535ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:36:38.121871Z","caller":"traceutil/trace.go:171","msg":"trace[434117448] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"143.829499ms","start":"2025-01-27T12:36:37.978027Z","end":"2025-01-27T12:36:38.121856Z","steps":["trace[434117448] 'process raft request'  (duration: 143.739027ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:36:38.130138Z","caller":"traceutil/trace.go:171","msg":"trace[597138632] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"151.689687ms","start":"2025-01-27T12:36:37.978430Z","end":"2025-01-27T12:36:38.130119Z","steps":["trace[597138632] 'process raft request'  (duration: 151.45289ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:46:13.938797Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":828}
	{"level":"info","ts":"2025-01-27T12:46:13.974174Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":828,"took":"33.907675ms","hash":3974087519,"current-db-size-bytes":2834432,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2834432,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-01-27T12:46:13.974345Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3974087519,"revision":828,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T12:51:13.945928Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1081}
	{"level":"info","ts":"2025-01-27T12:51:13.950125Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1081,"took":"3.702625ms","hash":2199425907,"current-db-size-bytes":2834432,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1753088,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:51:13.950177Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2199425907,"revision":1081,"compact-revision":828}
	{"level":"info","ts":"2025-01-27T12:56:13.953870Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1332}
	{"level":"info","ts":"2025-01-27T12:56:13.958472Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1332,"took":"3.986971ms","hash":2517371901,"current-db-size-bytes":2834432,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:56:13.958608Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2517371901,"revision":1332,"compact-revision":1081}
	
	
	==> kernel <==
	 12:58:31 up 27 min,  0 users,  load average: 0.03, 0.11, 0.15
	Linux default-k8s-diff-port-887672 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f966b9a94f4c0841d94a62ed35f74817624d321388bc764618d6552f381b48c5] <==
	I0127 12:54:16.640096       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:54:16.640221       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:56:15.635135       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:56:15.635509       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:56:16.637423       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:56:16.637757       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:56:16.637899       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:56:16.638105       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:56:16.639398       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:56:16.639514       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:57:16.640110       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:57:16.640331       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:57:16.640590       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:57:16.640851       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:57:16.641653       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:57:16.643015       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [01a7ea5c124b2bed6f9e62b02e38159f93b98ceb06845b5f775367c032619ec9] <==
	E0127 12:53:53.404965       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:53:53.533182       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:54:23.411423       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:54:23.540224       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:54:53.418288       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:54:53.549497       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:55:23.424289       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:23.562056       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:55:53.432278       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:55:53.569563       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:23.441126       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:23.577814       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:56:53.447994       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:56:53.585671       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-887672"
	I0127 12:56:53.588017       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:57:23.454598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:57:23.598877       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:57:33.964767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="290.586µs"
	I0127 12:57:41.163327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="138.213µs"
	I0127 12:57:42.861199       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="84.022µs"
	I0127 12:57:46.966475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="164.862µs"
	E0127 12:57:53.460148       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:57:53.605930       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:58:23.466197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:58:23.613527       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [e83a2d888139c302c09fd1b773c33241661af03856010138582c541e7309fbbe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:36:24.965227       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:36:24.978139       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.130"]
	E0127 12:36:24.978211       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:25.067557       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:25.067789       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:25.067923       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:25.073930       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:25.076270       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:25.076282       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:25.078893       1 config.go:199] "Starting service config controller"
	I0127 12:36:25.080909       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:25.080946       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:25.080950       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:25.081589       1 config.go:329] "Starting node config controller"
	I0127 12:36:25.081595       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:25.181033       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:25.181123       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:25.181790       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00f284350d3de24a9e340631aed8abe60664abafe6a698d6c8d303b959ba9d8e] <==
	W0127 12:36:16.735621       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:36:16.735988       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:16.741886       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:36:16.741928       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 12:36:16.801073       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:36:16.801131       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:16.809298       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 12:36:16.809614       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:16.944016       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 12:36:16.944064       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:16.969779       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 12:36:16.969830       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:16.991990       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 12:36:16.992510       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:17.029959       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:36:17.030155       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:17.036883       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 12:36:17.036961       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:17.161033       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 12:36:17.161204       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:17.192014       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:36:17.192251       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:36:17.267354       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 12:36:17.267403       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:18.445976       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:57:21 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:57:21.963680    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-twqz4" podUID="107a2af6-937d-4c95-a8dd-f47f59dd3afb"
	Jan 27 12:57:27 default-k8s-diff-port-887672 kubelet[3404]: I0127 12:57:27.949773    3404 scope.go:117] "RemoveContainer" containerID="6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794"
	Jan 27 12:57:27 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:57:27.950341    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-4595h_kubernetes-dashboard(74bcf07c-65d1-456f-aaab-50f73cec2d9e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-4595h" podUID="74bcf07c-65d1-456f-aaab-50f73cec2d9e"
	Jan 27 12:57:33 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:57:33.950833    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-twqz4" podUID="107a2af6-937d-4c95-a8dd-f47f59dd3afb"
	Jan 27 12:57:40 default-k8s-diff-port-887672 kubelet[3404]: I0127 12:57:40.950253    3404 scope.go:117] "RemoveContainer" containerID="6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794"
	Jan 27 12:57:41 default-k8s-diff-port-887672 kubelet[3404]: I0127 12:57:41.147382    3404 scope.go:117] "RemoveContainer" containerID="6cd38647e328734831653251863de6e9b13a8a72ae4ddf366000e0065a94e794"
	Jan 27 12:57:41 default-k8s-diff-port-887672 kubelet[3404]: I0127 12:57:41.147661    3404 scope.go:117] "RemoveContainer" containerID="cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07"
	Jan 27 12:57:41 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:57:41.147995    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-4595h_kubernetes-dashboard(74bcf07c-65d1-456f-aaab-50f73cec2d9e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-4595h" podUID="74bcf07c-65d1-456f-aaab-50f73cec2d9e"
	Jan 27 12:57:42 default-k8s-diff-port-887672 kubelet[3404]: I0127 12:57:42.848270    3404 scope.go:117] "RemoveContainer" containerID="cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07"
	Jan 27 12:57:42 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:57:42.848477    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-4595h_kubernetes-dashboard(74bcf07c-65d1-456f-aaab-50f73cec2d9e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-4595h" podUID="74bcf07c-65d1-456f-aaab-50f73cec2d9e"
	Jan 27 12:57:46 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:57:46.952296    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-twqz4" podUID="107a2af6-937d-4c95-a8dd-f47f59dd3afb"
	Jan 27 12:57:57 default-k8s-diff-port-887672 kubelet[3404]: I0127 12:57:57.949891    3404 scope.go:117] "RemoveContainer" containerID="cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07"
	Jan 27 12:57:57 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:57:57.950146    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-4595h_kubernetes-dashboard(74bcf07c-65d1-456f-aaab-50f73cec2d9e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-4595h" podUID="74bcf07c-65d1-456f-aaab-50f73cec2d9e"
	Jan 27 12:58:00 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:58:00.951135    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-twqz4" podUID="107a2af6-937d-4c95-a8dd-f47f59dd3afb"
	Jan 27 12:58:12 default-k8s-diff-port-887672 kubelet[3404]: I0127 12:58:12.949413    3404 scope.go:117] "RemoveContainer" containerID="cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07"
	Jan 27 12:58:12 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:58:12.950130    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-4595h_kubernetes-dashboard(74bcf07c-65d1-456f-aaab-50f73cec2d9e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-4595h" podUID="74bcf07c-65d1-456f-aaab-50f73cec2d9e"
	Jan 27 12:58:12 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:58:12.950822    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-twqz4" podUID="107a2af6-937d-4c95-a8dd-f47f59dd3afb"
	Jan 27 12:58:18 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:58:18.967986    3404 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:58:18 default-k8s-diff-port-887672 kubelet[3404]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:58:18 default-k8s-diff-port-887672 kubelet[3404]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:58:18 default-k8s-diff-port-887672 kubelet[3404]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:58:18 default-k8s-diff-port-887672 kubelet[3404]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:58:25 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:58:25.950318    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-twqz4" podUID="107a2af6-937d-4c95-a8dd-f47f59dd3afb"
	Jan 27 12:58:25 default-k8s-diff-port-887672 kubelet[3404]: I0127 12:58:25.950819    3404 scope.go:117] "RemoveContainer" containerID="cd179f17a7d124bdc760200c6f50524fc873cfdfe521df8808c1717f465a6e07"
	Jan 27 12:58:25 default-k8s-diff-port-887672 kubelet[3404]: E0127 12:58:25.951066    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-4595h_kubernetes-dashboard(74bcf07c-65d1-456f-aaab-50f73cec2d9e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-4595h" podUID="74bcf07c-65d1-456f-aaab-50f73cec2d9e"
	
	
	==> kubernetes-dashboard [9d6d9aeb4ff44eae5d4d8911f2b751ce6811813322689e34bbdb1af502e02c46] <==
	2025/01/27 12:46:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:46:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:47:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:47:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:48:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:49:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:50:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:51:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:52:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:53:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:54:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:55:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:56:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:57:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:57:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:58:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5284bc00a9c6c55f53a26f978f5e6d727505b755cb52d9c6e8eb4d846e194226] <==
	I0127 12:36:26.545190       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 12:36:26.633016       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 12:36:26.633196       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 12:36:26.764480       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 12:36:26.764987       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-887672_37f55b54-5058-4ec6-bd1b-1950b229461e!
	I0127 12:36:26.765195       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9fe32020-efba-450a-bf8d-85e20f6375a7", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-887672_37f55b54-5058-4ec6-bd1b-1950b229461e became leader
	I0127 12:36:26.876814       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-887672_37f55b54-5058-4ec6-bd1b-1950b229461e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-887672 -n default-k8s-diff-port-887672
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-887672 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-twqz4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-887672 describe pod metrics-server-f79f97bbb-twqz4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-887672 describe pod metrics-server-f79f97bbb-twqz4: exit status 1 (60.507553ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-twqz4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-887672 describe pod metrics-server-f79f97bbb-twqz4: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1629.91s)

                                                
                                    

Test pass (275/316)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 24.47
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 12.47
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 83.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 210.95
29 TestAddons/serial/Volcano 41.75
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.51
35 TestAddons/parallel/Registry 22.33
36 TestAddons/parallel/Ingress 19.96
37 TestAddons/parallel/InspektorGadget 10.81
38 TestAddons/parallel/MetricsServer 5.71
40 TestAddons/parallel/CSI 58.06
41 TestAddons/parallel/Headlamp 28.05
42 TestAddons/parallel/CloudSpanner 6.59
43 TestAddons/parallel/LocalPath 21.12
44 TestAddons/parallel/NvidiaDevicePlugin 6.52
45 TestAddons/parallel/Yakd 10.79
47 TestAddons/StoppedEnableDisable 91.24
48 TestCertOptions 53.46
49 TestCertExpiration 324.16
51 TestForceSystemdFlag 94.81
52 TestForceSystemdEnv 44.14
54 TestKVMDriverInstallOrUpdate 4.25
58 TestErrorSpam/setup 38.09
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.76
61 TestErrorSpam/pause 1.48
62 TestErrorSpam/unpause 1.62
63 TestErrorSpam/stop 5.12
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.72
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.06
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.56
75 TestFunctional/serial/CacheCmd/cache/add_local 2.33
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 41.49
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.22
86 TestFunctional/serial/LogsFileCmd 1.27
87 TestFunctional/serial/InvalidService 4.11
89 TestFunctional/parallel/ConfigCmd 0.4
90 TestFunctional/parallel/DashboardCmd 30.37
91 TestFunctional/parallel/DryRun 0.34
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.83
97 TestFunctional/parallel/ServiceCmdConnect 9.49
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 42.49
101 TestFunctional/parallel/SSHCmd 0.42
102 TestFunctional/parallel/CpCmd 1.29
103 TestFunctional/parallel/MySQL 26.18
104 TestFunctional/parallel/FileSync 0.3
105 TestFunctional/parallel/CertSync 1.33
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
113 TestFunctional/parallel/License 1.17
123 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
125 TestFunctional/parallel/ProfileCmd/profile_list 0.39
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
127 TestFunctional/parallel/MountCmd/any-port 8.7
128 TestFunctional/parallel/MountCmd/specific-port 1.87
129 TestFunctional/parallel/ServiceCmd/List 0.28
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
132 TestFunctional/parallel/ServiceCmd/Format 0.33
133 TestFunctional/parallel/ServiceCmd/URL 0.42
134 TestFunctional/parallel/MountCmd/VerifyCleanup 0.95
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
142 TestFunctional/parallel/ImageCommands/ImageBuild 4.88
143 TestFunctional/parallel/ImageCommands/Setup 1.82
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.78
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.21
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.27
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
151 TestFunctional/parallel/Version/short 0.05
152 TestFunctional/parallel/Version/components 0.42
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 187.86
160 TestMultiControlPlane/serial/DeployApp 5.62
161 TestMultiControlPlane/serial/PingHostFromPods 1.17
162 TestMultiControlPlane/serial/AddWorkerNode 55.95
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
165 TestMultiControlPlane/serial/CopyFile 12.78
166 TestMultiControlPlane/serial/StopSecondaryNode 91.64
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
168 TestMultiControlPlane/serial/RestartSecondaryNode 41.2
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 463.7
171 TestMultiControlPlane/serial/DeleteSecondaryNode 6.66
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
173 TestMultiControlPlane/serial/StopCluster 272.91
174 TestMultiControlPlane/serial/RestartCluster 124.73
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 73.27
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
181 TestJSONOutput/start/Command 55.11
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.66
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.57
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.45
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 86.05
213 TestMountStart/serial/StartWithMountFirst 28.88
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 27.84
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.69
218 TestMountStart/serial/VerifyMountPostDelete 0.38
219 TestMountStart/serial/Stop 1.29
220 TestMountStart/serial/RestartStopped 23.67
221 TestMountStart/serial/VerifyMountPostStop 0.38
224 TestMultiNode/serial/FreshStart2Nodes 108.44
225 TestMultiNode/serial/DeployApp2Nodes 4.95
226 TestMultiNode/serial/PingHostFrom2Pods 0.76
227 TestMultiNode/serial/AddNode 52.1
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 7.04
231 TestMultiNode/serial/StopNode 2.11
232 TestMultiNode/serial/StartAfterStop 33.38
233 TestMultiNode/serial/RestartKeepsNodes 310.94
234 TestMultiNode/serial/DeleteNode 2.13
235 TestMultiNode/serial/StopMultiNode 181.86
236 TestMultiNode/serial/RestartMultiNode 106.95
237 TestMultiNode/serial/ValidateNameConflict 42.87
242 TestPreload 204.55
244 TestScheduledStopUnix 112.98
248 TestRunningBinaryUpgrade 165.99
250 TestKubernetesUpgrade 243.14
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 119.8
255 TestNoKubernetes/serial/StartWithStopK8s 72.25
256 TestNoKubernetes/serial/Start 29.27
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
258 TestNoKubernetes/serial/ProfileList 1.06
259 TestNoKubernetes/serial/Stop 1.29
260 TestNoKubernetes/serial/StartNoArgs 63.35
261 TestStoppedBinaryUpgrade/Setup 2.21
262 TestStoppedBinaryUpgrade/Upgrade 112.12
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
271 TestNetworkPlugins/group/false 3.4
283 TestPause/serial/Start 64.72
284 TestNetworkPlugins/group/auto/Start 76.94
285 TestPause/serial/SecondStartNoReconfiguration 56.43
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
287 TestNetworkPlugins/group/kindnet/Start 67.59
288 TestNetworkPlugins/group/calico/Start 92.17
289 TestNetworkPlugins/group/auto/KubeletFlags 0.33
290 TestNetworkPlugins/group/auto/NetCatPod 12.16
291 TestPause/serial/Pause 0.72
292 TestPause/serial/VerifyStatus 0.26
293 TestPause/serial/Unpause 0.65
294 TestPause/serial/PauseAgain 0.79
295 TestPause/serial/DeletePaused 1.01
296 TestPause/serial/VerifyDeletedResources 0.78
297 TestNetworkPlugins/group/auto/DNS 0.2
298 TestNetworkPlugins/group/auto/Localhost 0.14
299 TestNetworkPlugins/group/auto/HairPin 0.14
300 TestNetworkPlugins/group/custom-flannel/Start 83.38
301 TestNetworkPlugins/group/enable-default-cni/Start 83.26
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
304 TestNetworkPlugins/group/kindnet/NetCatPod 8.21
305 TestNetworkPlugins/group/kindnet/DNS 0.18
306 TestNetworkPlugins/group/kindnet/Localhost 0.12
307 TestNetworkPlugins/group/kindnet/HairPin 0.14
308 TestNetworkPlugins/group/flannel/Start 85.44
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.22
311 TestNetworkPlugins/group/calico/NetCatPod 9.24
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
314 TestNetworkPlugins/group/calico/DNS 0.16
315 TestNetworkPlugins/group/calico/Localhost 0.15
316 TestNetworkPlugins/group/calico/HairPin 0.15
317 TestNetworkPlugins/group/custom-flannel/DNS 0.18
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
322 TestNetworkPlugins/group/bridge/Start 66.3
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
327 TestStartStop/group/old-k8s-version/serial/FirstStart 185.57
329 TestStartStop/group/no-preload/serial/FirstStart 103.52
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
332 TestNetworkPlugins/group/flannel/NetCatPod 9.24
333 TestNetworkPlugins/group/flannel/DNS 0.15
334 TestNetworkPlugins/group/flannel/Localhost 0.11
335 TestNetworkPlugins/group/flannel/HairPin 0.11
337 TestStartStop/group/embed-certs/serial/FirstStart 81.5
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
339 TestNetworkPlugins/group/bridge/NetCatPod 12.26
340 TestNetworkPlugins/group/bridge/DNS 0.18
341 TestNetworkPlugins/group/bridge/Localhost 0.12
342 TestNetworkPlugins/group/bridge/HairPin 0.13
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.27
345 TestStartStop/group/no-preload/serial/DeployApp 10.31
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
347 TestStartStop/group/no-preload/serial/Stop 91
348 TestStartStop/group/embed-certs/serial/DeployApp 9.26
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
350 TestStartStop/group/embed-certs/serial/Stop 91.01
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.48
354 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.99
356 TestStartStop/group/old-k8s-version/serial/Stop 91.17
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
364 TestStartStop/group/old-k8s-version/serial/SecondStart 165.68
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
368 TestStartStop/group/old-k8s-version/serial/Pause 2.48
370 TestStartStop/group/newest-cni/serial/FirstStart 46.56
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
373 TestStartStop/group/newest-cni/serial/Stop 6.6
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/newest-cni/serial/SecondStart 37.53
376 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
379 TestStartStop/group/newest-cni/serial/Pause 2.55
TestDownloadOnly/v1.20.0/json-events (24.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-669287 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-669287 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (24.468456601s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.47s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 11:20:25.910643  478387 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 11:20:25.910783  478387 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-669287
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-669287: exit status 85 (60.810692ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-669287 | jenkins | v1.35.0 | 27 Jan 25 11:20 UTC |          |
	|         | -p download-only-669287        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:20:01
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:20:01.487384  478399 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:20:01.487503  478399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:20:01.487508  478399 out.go:358] Setting ErrFile to fd 2...
	I0127 11:20:01.487513  478399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:20:01.487687  478399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	W0127 11:20:01.487815  478399 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20318-471120/.minikube/config/config.json: open /home/jenkins/minikube-integration/20318-471120/.minikube/config/config.json: no such file or directory
	I0127 11:20:01.488394  478399 out.go:352] Setting JSON to true
	I0127 11:20:01.489390  478399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7344,"bootTime":1737969457,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:20:01.489502  478399 start.go:139] virtualization: kvm guest
	I0127 11:20:01.491955  478399 out.go:97] [download-only-669287] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0127 11:20:01.492091  478399 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 11:20:01.492140  478399 notify.go:220] Checking for updates...
	I0127 11:20:01.493497  478399 out.go:169] MINIKUBE_LOCATION=20318
	I0127 11:20:01.494882  478399 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:20:01.496425  478399 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 11:20:01.497810  478399 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 11:20:01.499226  478399 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 11:20:01.501783  478399 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 11:20:01.502189  478399 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:20:01.535873  478399 out.go:97] Using the kvm2 driver based on user configuration
	I0127 11:20:01.535907  478399 start.go:297] selected driver: kvm2
	I0127 11:20:01.535914  478399 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:20:01.536317  478399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:20:01.536423  478399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:20:01.552453  478399 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:20:01.552502  478399 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:20:01.553081  478399 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 11:20:01.553236  478399 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:20:01.553273  478399 cni.go:84] Creating CNI manager for ""
	I0127 11:20:01.553323  478399 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:20:01.553335  478399 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 11:20:01.553399  478399 start.go:340] cluster config:
	{Name:download-only-669287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-669287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:20:01.553583  478399 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:20:01.556319  478399 out.go:97] Downloading VM boot image ...
	I0127 11:20:01.556387  478399 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 11:20:11.503561  478399 out.go:97] Starting "download-only-669287" primary control-plane node in "download-only-669287" cluster
	I0127 11:20:11.503598  478399 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 11:20:11.602185  478399 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0127 11:20:11.602237  478399 cache.go:56] Caching tarball of preloaded images
	I0127 11:20:11.602422  478399 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 11:20:11.604388  478399 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 11:20:11.604408  478399 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0127 11:20:11.707169  478399 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-669287 host does not exist
	  To start a cluster, run: "minikube start -p download-only-669287"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-669287
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (12.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-440620 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-440620 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (12.471318119s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (12.47s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 11:20:38.709228  478387 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 11:20:38.709282  478387 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-440620
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-440620: exit status 85 (61.217535ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-669287 | jenkins | v1.35.0 | 27 Jan 25 11:20 UTC |                     |
	|         | -p download-only-669287        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	| delete  | -p download-only-669287        | download-only-669287 | jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	| start   | -o=json --download-only        | download-only-440620 | jenkins | v1.35.0 | 27 Jan 25 11:20 UTC |                     |
	|         | -p download-only-440620        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:20:26
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:20:26.279387  478651 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:20:26.279485  478651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:20:26.279497  478651 out.go:358] Setting ErrFile to fd 2...
	I0127 11:20:26.279504  478651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:20:26.279673  478651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 11:20:26.280243  478651 out.go:352] Setting JSON to true
	I0127 11:20:26.281224  478651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7369,"bootTime":1737969457,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:20:26.281333  478651 start.go:139] virtualization: kvm guest
	I0127 11:20:26.283382  478651 out.go:97] [download-only-440620] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:20:26.283554  478651 notify.go:220] Checking for updates...
	I0127 11:20:26.284946  478651 out.go:169] MINIKUBE_LOCATION=20318
	I0127 11:20:26.286391  478651 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:20:26.287591  478651 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 11:20:26.288774  478651 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 11:20:26.289925  478651 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 11:20:26.292055  478651 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 11:20:26.292325  478651 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:20:26.325837  478651 out.go:97] Using the kvm2 driver based on user configuration
	I0127 11:20:26.325864  478651 start.go:297] selected driver: kvm2
	I0127 11:20:26.325870  478651 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:20:26.326227  478651 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:20:26.326322  478651 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:20:26.340883  478651 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:20:26.340931  478651 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:20:26.341480  478651 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 11:20:26.341654  478651 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:20:26.341686  478651 cni.go:84] Creating CNI manager for ""
	I0127 11:20:26.341749  478651 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 11:20:26.341761  478651 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 11:20:26.341825  478651 start.go:340] cluster config:
	{Name:download-only-440620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-440620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:20:26.341928  478651 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:20:26.343526  478651 out.go:97] Starting "download-only-440620" primary control-plane node in "download-only-440620" cluster
	I0127 11:20:26.343646  478651 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:20:27.239707  478651 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0127 11:20:27.239764  478651 cache.go:56] Caching tarball of preloaded images
	I0127 11:20:27.239945  478651 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:20:27.241702  478651 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 11:20:27.241725  478651 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 ...
	I0127 11:20:27.339743  478651 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:8f020f9a34bd60feec225b8429b992a8 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-440620 host does not exist
	  To start a cluster, run: "minikube start -p download-only-440620"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-440620
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 11:20:39.295511  478387 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-915611 --alsologtostderr --binary-mirror http://127.0.0.1:36901 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-915611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-915611
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestOffline (83.56s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-204211 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-204211 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m22.257806776s)
helpers_test.go:175: Cleaning up "offline-containerd-204211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-204211
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-204211: (1.297974616s)
--- PASS: TestOffline (83.56s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-582557
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-582557: exit status 85 (53.752339ms)

                                                
                                                
-- stdout --
	* Profile "addons-582557" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-582557"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-582557
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-582557: exit status 85 (54.496893ms)

                                                
                                                
-- stdout --
	* Profile "addons-582557" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-582557"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (210.95s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-582557 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-582557 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m30.952906185s)
--- PASS: TestAddons/Setup (210.95s)

                                                
                                    
x
+
TestAddons/serial/Volcano (41.75s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 23.570805ms
addons_test.go:815: volcano-admission stabilized in 23.64763ms
addons_test.go:823: volcano-controller stabilized in 23.677754ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-vc2tm" [a5bd03c5-7165-4958-b40a-0e29be9b76c9] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003615834s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-btqf2" [12b6a7df-4fb5-4115-8ece-588389ae0cc1] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004154504s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-488tt" [095ec23b-a3a2-4888-9ad3-448a7ea6719b] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003412389s
addons_test.go:842: (dbg) Run:  kubectl --context addons-582557 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-582557 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-582557 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a36e903c-6a69-4076-97f1-008fce98721e] Pending
helpers_test.go:344: "test-job-nginx-0" [a36e903c-6a69-4076-97f1-008fce98721e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a36e903c-6a69-4076-97f1-008fce98721e] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003683681s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-582557 addons disable volcano --alsologtostderr -v=1: (11.361972302s)
--- PASS: TestAddons/serial/Volcano (41.75s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-582557 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-582557 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-582557 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-582557 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1cd727e1-e2e4-42a1-b4e8-95644c13b7a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1cd727e1-e2e4-42a1-b4e8-95644c13b7a3] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004092868s
addons_test.go:633: (dbg) Run:  kubectl --context addons-582557 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-582557 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-582557 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

                                                
                                    
x
+
TestAddons/parallel/Registry (22.33s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.891151ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-bp69x" [0f236466-4253-4fab-9535-df2236049213] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003110984s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xj6tn" [354af10d-c66b-441c-9ac3-610afa1052d9] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003011519s
addons_test.go:331: (dbg) Run:  kubectl --context addons-582557 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-582557 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-582557 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.547495487s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 ip
2025/01/27 11:25:33 [DEBUG] GET http://192.168.39.113:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.33s)
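The reachability check above boils down to starting a throwaway busybox pod that wgets the registry Service's cluster DNS name and exits. A minimal sketch of the same check driven from Go via os/exec, assuming kubectl is on PATH and reusing this run's profile name as the kubectl context (the "-it" flag from the test command is dropped because there is no TTY here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Throwaway pod that probes the in-cluster registry Service, mirroring the
	// "kubectl run --rm registry-test ..." command recorded in the test above.
	cmd := exec.Command("kubectl", "--context", "addons-582557",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}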

                                                
                                    
x
+
TestAddons/parallel/Ingress (19.96s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-582557 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-582557 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-582557 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [083e8cec-8f46-488d-9e57-82dbb4675426] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [083e8cec-8f46-488d-9e57-82dbb4675426] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003304101s
I0127 11:25:45.082604  478387 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-582557 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.113
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-582557 addons disable ingress-dns --alsologtostderr -v=1: (1.000543733s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-582557 addons disable ingress --alsologtostderr -v=1: (7.717744951s)
--- PASS: TestAddons/parallel/Ingress (19.96s)
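The curl step above exercises host-based routing: the request is sent to the ingress controller's address while the Host header names the host in the ingress rule. A minimal Go sketch of the same request, assuming the 192.168.39.113 address printed by "minikube ip" in this run and the nginx.example.com host from testdata/nginx-ingress-v1.yaml:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// ingress-nginx routes on the Host header, not on the URL, so the request
	// targets the node IP while Host carries the ingress rule's hostname.
	req, err := http.NewRequest("GET", "http://192.168.39.113/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}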

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rfbv5" [ab6ba901-1704-47a3-8d3e-06a6ac5c4ada] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004045667s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-582557 addons disable inspektor-gadget --alsologtostderr -v=1: (5.8064754s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.71s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.290996ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-7gpsx" [c29f268d-09a5-47d8-bba4-1af0ed9057ff] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003493029s
addons_test.go:402: (dbg) Run:  kubectl --context addons-582557 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.71s)

                                                
                                    
x
+
TestAddons/parallel/CSI (58.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 11:25:31.077695  478387 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 11:25:31.082558  478387 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 11:25:31.082585  478387 kapi.go:107] duration metric: took 4.911128ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.920099ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-582557 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-582557 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ed29913c-1f3d-4032-aeaa-855f5cacb4b2] Pending
helpers_test.go:344: "task-pv-pod" [ed29913c-1f3d-4032-aeaa-855f5cacb4b2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ed29913c-1f3d-4032-aeaa-855f5cacb4b2] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003664055s
addons_test.go:511: (dbg) Run:  kubectl --context addons-582557 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-582557 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-582557 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-582557 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-582557 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-582557 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-582557 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e2e1d50f-f85a-4b78-b824-f53601212084] Pending
helpers_test.go:344: "task-pv-pod-restore" [e2e1d50f-f85a-4b78-b824-f53601212084] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e2e1d50f-f85a-4b78-b824-f53601212084] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0033536s
addons_test.go:553: (dbg) Run:  kubectl --context addons-582557 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-582557 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-582557 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-582557 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.707462626s)
--- PASS: TestAddons/parallel/CSI (58.06s)
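The long runs of "get pvc ... -o jsonpath={.status.phase}" above are a plain poll-until-Bound loop. A minimal sketch of that wait pattern, shelling out to kubectl; the context name, PVC name and timeout are taken from this run and are only examples:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound keeps asking kubectl for the PVC phase until it reports
// Bound or the deadline passes, mirroring the helpers_test.go poll above.
func waitForPVCBound(context, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-582557", "hpvc", "default", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pvc is Bound")
}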

                                                
                                    
x
+
TestAddons/parallel/Headlamp (28.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-582557 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-582557 --alsologtostderr -v=1: (1.095959532s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-9hfcp" [1acf58e1-5262-4df5-9830-d39206e7163c] Pending
helpers_test.go:344: "headlamp-69d78d796f-9hfcp" [1acf58e1-5262-4df5-9830-d39206e7163c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-9hfcp" [1acf58e1-5262-4df5-9830-d39206e7163c] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.004036257s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-582557 addons disable headlamp --alsologtostderr -v=1: (5.943892124s)
--- PASS: TestAddons/parallel/Headlamp (28.05s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-5svh2" [e38e4385-231b-49a5-bc4b-fd1608ef8ff0] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004902681s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (21.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-582557 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-582557 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582557 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [025bc9d1-7a68-44ee-a974-233c212baf6b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [025bc9d1-7a68-44ee-a974-233c212baf6b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [025bc9d1-7a68-44ee-a974-233c212baf6b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 14.004149889s
addons_test.go:906: (dbg) Run:  kubectl --context addons-582557 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 ssh "cat /opt/local-path-provisioner/pvc-a1b48926-5cbf-4e93-b090-f8cd7a3392a8_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-582557 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-582557 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (21.12s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8jf4x" [9751fa5e-c34b-4a81-b67c-77f7eb07ffc7] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003217791s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-8td4x" [fbdbbeda-4ffc-4789-a206-964da26539ad] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004042126s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-582557 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-582557 addons disable yakd --alsologtostderr -v=1: (5.788495362s)
--- PASS: TestAddons/parallel/Yakd (10.79s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (91.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-582557
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-582557: (1m30.958336241s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-582557
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-582557
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-582557
--- PASS: TestAddons/StoppedEnableDisable (91.24s)

                                                
                                    
x
+
TestCertOptions (53.46s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-926484 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-926484 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (52.180814489s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-926484 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-926484 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-926484 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-926484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-926484
--- PASS: TestCertOptions (53.46s)
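The openssl step above checks that the extra --apiserver-ips, --apiserver-names and --apiserver-port values ended up in the apiserver certificate. A minimal sketch of the same check in Go with crypto/x509, assuming the certificate has first been copied out of the VM to the working directory (inside the VM it lives at /var/lib/minikube/certs/apiserver.crt):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Parse the PEM-encoded apiserver certificate and print its SANs, which is
	// where the custom names and IPs passed to "minikube start" should appear.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com, ...
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}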

                                                
                                    
x
+
TestCertExpiration (324.16s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-455827 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-455827 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m22.079945169s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-455827 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-455827 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (1m1.264336003s)
helpers_test.go:175: Cleaning up "cert-expiration-455827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-455827
--- PASS: TestCertExpiration (324.16s)

                                                
                                    
x
+
TestForceSystemdFlag (94.81s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-497653 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-497653 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m33.62974584s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-497653 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-497653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-497653
--- PASS: TestForceSystemdFlag (94.81s)

                                                
                                    
x
+
TestForceSystemdEnv (44.14s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-288335 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-288335 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (43.121156142s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-288335 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-288335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-288335
--- PASS: TestForceSystemdEnv (44.14s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.25s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0127 12:22:58.337297  478387 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:22:58.337467  478387 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 12:22:58.366260  478387 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 12:22:58.366632  478387 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 12:22:58.366686  478387 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1752721257/001/docker-machine-driver-kvm2
I0127 12:22:58.562172  478387 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1752721257/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0] Decompressors:map[bz2:0xc00072a020 gz:0xc00072a028 tar:0xc0004b3fb0 tar.bz2:0xc0004b3fc0 tar.gz:0xc0004b3fd0 tar.xz:0xc0004b3fe0 tar.zst:0xc0004b3ff0 tbz2:0xc0004b3fc0 tgz:0xc0004b3fd0 txz:0xc0004b3fe0 tzst:0xc0004b3ff0 xz:0xc00072a030 zip:0xc00072a040 zst:0xc00072a038] Getters:map[file:0xc000b74590 http:0xc000075860 https:0xc0000758b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 12:22:58.562217  478387 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1752721257/001/docker-machine-driver-kvm2
I0127 12:23:00.562626  478387 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:23:00.562711  478387 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 12:23:00.592179  478387 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 12:23:00.592208  478387 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 12:23:00.592279  478387 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 12:23:00.592302  478387 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1752721257/002/docker-machine-driver-kvm2
I0127 12:23:00.623360  478387 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1752721257/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0 0x530c6a0] Decompressors:map[bz2:0xc00072a020 gz:0xc00072a028 tar:0xc0004b3fb0 tar.bz2:0xc0004b3fc0 tar.gz:0xc0004b3fd0 tar.xz:0xc0004b3fe0 tar.zst:0xc0004b3ff0 tbz2:0xc0004b3fc0 tgz:0xc0004b3fd0 txz:0xc0004b3fe0 tzst:0xc0004b3ff0 xz:0xc00072a030 zip:0xc00072a040 zst:0xc00072a038] Getters:map[file:0xc00136a410 http:0xc001c984b0 https:0xc001c98500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 12:23:00.623397  478387 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1752721257/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.25s)
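The log above shows the driver download falling back from the architecture-specific release asset (whose checksum file returned 404) to the common, un-suffixed asset. A minimal sketch of that try-arch-then-fall-back pattern, using the URLs from the log and omitting the checksum verification the real download.go performs:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url into dst, treating any non-200 status as an error.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
	// Arch-specific asset first, common asset as the fallback, as in the log.
	if err := fetch(base+"docker-machine-driver-kvm2-amd64", "docker-machine-driver-kvm2"); err != nil {
		fmt.Println("arch specific download failed:", err, "- trying the common version")
		if err := fetch(base+"docker-machine-driver-kvm2", "docker-machine-driver-kvm2"); err != nil {
			panic(err)
		}
	}
	fmt.Println("driver downloaded")
}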

                                                
                                    
x
+
TestErrorSpam/setup (38.09s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-671144 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-671144 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-671144 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-671144 --driver=kvm2  --container-runtime=containerd: (38.08503459s)
--- PASS: TestErrorSpam/setup (38.09s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
x
+
TestErrorSpam/pause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 pause
--- PASS: TestErrorSpam/pause (1.48s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

                                                
                                    
x
+
TestErrorSpam/stop (5.12s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 stop: (1.370933113s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 stop: (1.91320032s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-671144 --log_dir /tmp/nospam-671144 stop: (1.839303955s)
--- PASS: TestErrorSpam/stop (5.12s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/test/nested/copy/478387/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.72s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-508115 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0127 11:29:10.933640  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:10.940009  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:10.951316  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:10.972634  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:11.013992  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:11.095434  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:11.257022  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:11.578761  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:12.220867  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:13.502586  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:16.064704  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:21.186068  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:31.428040  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-508115 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (52.718445912s)
--- PASS: TestFunctional/serial/StartWithProxy (52.72s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (43.06s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 11:29:41.841082  478387 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-508115 --alsologtostderr -v=8
E0127 11:29:51.909819  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-508115 --alsologtostderr -v=8: (43.056474419s)
functional_test.go:663: soft start took 43.057325088s for "functional-508115" cluster.
I0127 11:30:24.897933  478387 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (43.06s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-508115 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 cache add registry.k8s.io/pause:3.1: (1.522424561s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 cache add registry.k8s.io/pause:3.3: (1.52398313s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 cache add registry.k8s.io/pause:latest: (1.515662398s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.56s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-508115 /tmp/TestFunctionalserialCacheCmdcacheadd_local2038355364/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 cache add minikube-local-cache-test:functional-508115
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 cache add minikube-local-cache-test:functional-508115: (2.025785887s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 cache delete minikube-local-cache-test:functional-508115
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-508115
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.33s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-508115 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.140265ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 cache reload
E0127 11:30:32.872050  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 cache reload: (1.250818958s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
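Note: cache_reload removes registry.k8s.io/pause:latest from the node's runtime with crictl, confirms `crictl inspecti` now fails, and then restores it with `cache reload`. Below is a minimal Go sketch of that round trip, assuming the minikube binary path and the functional-508115 profile used in this run.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

const (
	bin     = "out/minikube-linux-amd64" // minikube binary under test (assumption)
	profile = "functional-508115"        // profile name from this run (assumption)
)

// run executes the minikube binary with the given args and returns combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Remove the cached image from the node's container runtime.
	if out, err := run("-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("rmi failed: %v\n%s", err, out)
	}

	// inspecti should now fail because the image is gone from the node.
	if _, err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("expected inspecti to fail after rmi")
	}

	// cache reload pushes everything in the local cache back onto the node.
	if out, err := run("-p", profile, "cache", "reload"); err != nil {
		log.Fatalf("cache reload failed: %v\n%s", err, out)
	}

	// The image should be present again.
	if out, err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("image still missing after reload: %v\n%s", err, out)
	}
	fmt.Println("pause:latest restored by cache reload")
}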

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 kubectl -- --context functional-508115 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-508115 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.49s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-508115 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-508115 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.491866986s)
functional_test.go:761: restart took 41.491998348s for "functional-508115" cluster.
I0127 11:31:15.918174  478387 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (41.49s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-508115 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 logs: (1.224149661s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 logs --file /tmp/TestFunctionalserialLogsFileCmd3074830992/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 logs --file /tmp/TestFunctionalserialLogsFileCmd3074830992/001/logs.txt: (1.267717988s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                    
TestFunctional/serial/InvalidService (4.11s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-508115 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-508115
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-508115: exit status 115 (265.684846ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.167:31826 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-508115 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-508115 config get cpus: exit status 14 (63.95437ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-508115 config get cpus: exit status 14 (54.776414ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
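Note: the non-zero exits above are the expected behaviour of `config get` for an unset key: it exits with code 14 and prints "Error: specified key could not be found in config". A small Go sketch that treats exit code 14 as "key not set" rather than a failure (binary path and profile name from this run are assumptions):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-508115", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Printf("cpus is set to: %s", out)
		return
	}

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		// Exit code 14 in this log simply means the key is absent from the config.
		fmt.Println("cpus is not set")
		return
	}
	log.Fatalf("config get failed unexpectedly: %v\n%s", err, out)
}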

                                                
                                    
TestFunctional/parallel/DashboardCmd (30.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-508115 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-508115 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 486686: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.37s)

                                                
                                    
TestFunctional/parallel/DryRun (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-508115 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-508115 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (175.882467ms)

                                                
                                                
-- stdout --
	* [functional-508115] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:31:34.403172  486095 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:31:34.404861  486095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:31:34.404884  486095 out.go:358] Setting ErrFile to fd 2...
	I0127 11:31:34.404895  486095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:31:34.405486  486095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 11:31:34.406388  486095 out.go:352] Setting JSON to false
	I0127 11:31:34.407899  486095 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8037,"bootTime":1737969457,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:31:34.408037  486095 start.go:139] virtualization: kvm guest
	I0127 11:31:34.409828  486095 out.go:177] * [functional-508115] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:31:34.411301  486095 notify.go:220] Checking for updates...
	I0127 11:31:34.411340  486095 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:31:34.412620  486095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:31:34.413754  486095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 11:31:34.414816  486095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 11:31:34.415963  486095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:31:34.425672  486095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:31:34.427305  486095 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:31:34.427895  486095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:31:34.427959  486095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:31:34.449474  486095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0127 11:31:34.449989  486095 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:31:34.450556  486095 main.go:141] libmachine: Using API Version  1
	I0127 11:31:34.450581  486095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:31:34.451028  486095 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:31:34.451251  486095 main.go:141] libmachine: (functional-508115) Calling .DriverName
	I0127 11:31:34.451578  486095 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:31:34.452039  486095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:31:34.452095  486095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:31:34.467825  486095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0127 11:31:34.468341  486095 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:31:34.469007  486095 main.go:141] libmachine: Using API Version  1
	I0127 11:31:34.469037  486095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:31:34.469324  486095 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:31:34.469548  486095 main.go:141] libmachine: (functional-508115) Calling .DriverName
	I0127 11:31:34.503169  486095 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:31:34.504311  486095 start.go:297] selected driver: kvm2
	I0127 11:31:34.504334  486095 start.go:901] validating driver "kvm2" against &{Name:functional-508115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-508115 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:31:34.504484  486095 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:31:34.508874  486095 out.go:201] 
	W0127 11:31:34.510164  486095 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 11:31:34.511241  486095 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-508115 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.34s)
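Note: DryRun shows that `--dry-run` still runs resource validation against the existing profile: a 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, usable minimum 1800MB) without touching the VM, while the flag-free dry run succeeds. A hedged Go sketch of that check, using the same assumed binary and profile:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func dryRun(extra ...string) error {
	args := append([]string{"start", "-p", "functional-508115", "--dry-run",
		"--driver=kvm2", "--container-runtime=containerd"}, extra...)
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	// An undersized memory request should be rejected during validation.
	err := dryRun("--memory", "250MB")
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("250MB correctly rejected (RSRC_INSUFFICIENT_REQ_MEMORY)")
	} else {
		log.Fatalf("expected exit code 23, got: %v", err)
	}

	// Without the bad flag the dry run should validate cleanly.
	if err := dryRun(); err != nil {
		log.Fatalf("plain dry run failed: %v", err)
	}
	fmt.Println("dry run validated the existing profile")
}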

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-508115 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-508115 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (146.728059ms)

                                                
                                                
-- stdout --
	* [functional-508115] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:31:34.240961  486060 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:31:34.241241  486060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:31:34.241252  486060 out.go:358] Setting ErrFile to fd 2...
	I0127 11:31:34.241257  486060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:31:34.241577  486060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 11:31:34.242115  486060 out.go:352] Setting JSON to false
	I0127 11:31:34.243113  486060 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8037,"bootTime":1737969457,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:31:34.243215  486060 start.go:139] virtualization: kvm guest
	I0127 11:31:34.245188  486060 out.go:177] * [functional-508115] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 11:31:34.246300  486060 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:31:34.246331  486060 notify.go:220] Checking for updates...
	I0127 11:31:34.248530  486060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:31:34.249548  486060 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 11:31:34.250634  486060 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 11:31:34.251659  486060 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:31:34.252764  486060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:31:34.254197  486060 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:31:34.254608  486060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:31:34.254678  486060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:31:34.270220  486060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41641
	I0127 11:31:34.270711  486060 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:31:34.271436  486060 main.go:141] libmachine: Using API Version  1
	I0127 11:31:34.271478  486060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:31:34.271826  486060 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:31:34.272080  486060 main.go:141] libmachine: (functional-508115) Calling .DriverName
	I0127 11:31:34.272337  486060 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:31:34.272777  486060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:31:34.272846  486060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:31:34.288325  486060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0127 11:31:34.288761  486060 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:31:34.289268  486060 main.go:141] libmachine: Using API Version  1
	I0127 11:31:34.289288  486060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:31:34.289663  486060 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:31:34.289865  486060 main.go:141] libmachine: (functional-508115) Calling .DriverName
	I0127 11:31:34.328396  486060 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 11:31:34.329758  486060 start.go:297] selected driver: kvm2
	I0127 11:31:34.329778  486060 start.go:901] validating driver "kvm2" against &{Name:functional-508115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-508115 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:31:34.329921  486060 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:31:34.332173  486060 out.go:201] 
	W0127 11:31:34.333244  486060 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 11:31:34.334334  486060 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-508115 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-508115 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-v6pkm" [d1bdc901-ffb7-42e2-b07a-5ab6472393ed] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-v6pkm" [d1bdc901-ffb7-42e2-b07a-5ab6472393ed] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003523331s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.167:31684
functional_test.go:1675: http://192.168.39.167:31684: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-v6pkm

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.167:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.167:31684
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.49s)
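Note: ServiceCmdConnect creates an echoserver deployment, exposes it as a NodePort service, resolves the reachable URL with `minikube service ... --url`, and fetches it over HTTP. Below is a compact Go sketch of the same flow; the context, image tag, and deployment name mirror this log, and the explicit wait step is an added assumption used only for sequencing.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func must(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "--context=functional-508115" // kube context from this run (assumption)

	must("kubectl", ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8")
	must("kubectl", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")
	must("kubectl", ctx, "wait", "--for=condition=available",
		"deployment/hello-node-connect", "--timeout=120s")

	// minikube prints the reachable NodePort URL, e.g. http://192.168.39.167:31684 in this log.
	url := strings.TrimSpace(must("out/minikube-linux-amd64",
		"-p", "functional-508115", "service", "hello-node-connect", "--url"))

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s failed: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}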

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (42.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0d9d3cf0-5556-45aa-8f23-47f011400f8d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004195333s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-508115 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-508115 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-508115 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-508115 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [edc3633a-f5fc-4101-9fcb-f42261195f03] Pending
helpers_test.go:344: "sp-pod" [edc3633a-f5fc-4101-9fcb-f42261195f03] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [edc3633a-f5fc-4101-9fcb-f42261195f03] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004536324s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-508115 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-508115 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-508115 delete -f testdata/storage-provisioner/pod.yaml: (1.456786467s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-508115 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9c9bc00f-364c-4ebd-9c01-012cfff763b9] Pending
helpers_test.go:344: "sp-pod" [9c9bc00f-364c-4ebd-9c01-012cfff763b9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9c9bc00f-364c-4ebd-9c01-012cfff763b9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003018816s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-508115 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.49s)
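Note: the real assertion in PersistentVolumeClaim is persistence: a file written through the first sp-pod must still be visible after the pod is deleted and recreated against the same claim. A sketch of that check via kubectl; the manifest paths mirror the testdata files referenced above and assume the same checkout layout.

package main

import (
	"log"
	"os/exec"
	"strings"
)

func kubectl(args ...string) string {
	full := append([]string{"--context", "functional-508115"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	// Write through the first pod, then throw the pod away.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")

	// A fresh pod bound to the same PVC should still see the file.
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	if !strings.Contains(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"), "foo") {
		log.Fatal("file did not survive pod recreation")
	}
	log.Print("data persisted across pod recreation")
}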

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh -n functional-508115 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 cp functional-508115:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3570791170/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh -n functional-508115 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh -n functional-508115 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.29s)
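Note: CpCmd copies in both directions and creates missing target directories on the node. A short Go sketch of a host-to-node-to-host round trip using the same `cp` and `ssh -n` invocations seen above; the local paths are illustrative.

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func mk(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-508115"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	src := "testdata/cp-test.txt" // any local file (assumption)

	// Host -> node, then read it back over ssh.
	mk("cp", src, "/home/docker/cp-test.txt")
	remote := mk("ssh", "-n", "functional-508115", "sudo cat /home/docker/cp-test.txt")

	// Node -> host, then compare with what ssh returned.
	mk("cp", "functional-508115:/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt")
	local, err := os.ReadFile("/tmp/cp-test-roundtrip.txt")
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)) {
		log.Fatal("round-tripped file does not match")
	}
	log.Print("cp round trip OK")
}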

                                                
                                    
TestFunctional/parallel/MySQL (26.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-508115 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-8xfjz" [2f4ac797-fa63-40c4-9ca1-795ed1a1ae88] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-8xfjz" [2f4ac797-fa63-40c4-9ca1-795ed1a1ae88] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.015447139s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-508115 exec mysql-58ccfd96bb-8xfjz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-508115 exec mysql-58ccfd96bb-8xfjz -- mysql -ppassword -e "show databases;": exit status 1 (207.054568ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 11:31:54.462409  478387 retry.go:31] will retry after 1.0618359s: exit status 1
E0127 11:31:54.793691  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1807: (dbg) Run:  kubectl --context functional-508115 exec mysql-58ccfd96bb-8xfjz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-508115 exec mysql-58ccfd96bb-8xfjz -- mysql -ppassword -e "show databases;": exit status 1 (201.875708ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 11:31:55.727099  478387 retry.go:31] will retry after 1.699204623s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-508115 exec mysql-58ccfd96bb-8xfjz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-508115 exec mysql-58ccfd96bb-8xfjz -- mysql -ppassword -e "show databases;": exit status 1 (150.507081ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 11:31:57.577869  478387 retry.go:31] will retry after 1.923223026s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-508115 exec mysql-58ccfd96bb-8xfjz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-508115 exec mysql-58ccfd96bb-8xfjz -- mysql -ppassword -e "show databases;": exit status 1 (107.196327ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 11:31:59.609486  478387 retry.go:31] will retry after 2.481397282s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-508115 exec mysql-58ccfd96bb-8xfjz -- mysql -ppassword -e "show databases;"
2025/01/27 11:32:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (26.18s)
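Note: MySQL illustrates that a pod can be Running well before mysqld accepts connections: the first exec attempts fail with ERROR 2002/1045 and the harness retries with a growing delay (the retry.go lines). A hedged sketch of that retry loop around a kubectl exec; the pod name is the one from this run and would differ in any other deployment.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-508115", "exec", "mysql-58ccfd96bb-8xfjz", "--",
		"mysql", "-ppassword", "-e", "show databases;"} // pod name from this log (assumption)

	// mysqld needs a while after the pod is Running, so retry with a growing delay.
	delay := time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("mysql ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		log.Printf("attempt %d failed (%v), retrying in %s", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	log.Fatal("mysql never became reachable")
}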

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/478387/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo cat /etc/test/nested/copy/478387/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/478387.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo cat /etc/ssl/certs/478387.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/478387.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo cat /usr/share/ca-certificates/478387.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4783872.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo cat /etc/ssl/certs/4783872.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4783872.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo cat /usr/share/ca-certificates/4783872.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.33s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-508115 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-508115 ssh "sudo systemctl is-active docker": exit status 1 (254.145539ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-508115 ssh "sudo systemctl is-active crio": exit status 1 (251.696274ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
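Note: with containerd selected, the docker and crio units on the node are inactive; `systemctl is-active` exits non-zero for that state (status 3 in the stderr above), so the `minikube ssh` wrapper also returns a non-zero exit while printing "inactive". A small Go sketch that treats a failing exit plus an "inactive" reply as the passing case:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-508115",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))

		switch {
		case err == nil:
			log.Fatalf("%s is unexpectedly active on a containerd node", unit)
		case strings.Contains(state, "inactive"):
			// is-active exits non-zero for inactive units, so an error here is the expected outcome.
			log.Printf("%s: inactive (as expected)", unit)
		default:
			log.Fatalf("could not query %s: %v\n%s", unit, err, out)
		}
	}
}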

                                                
                                    
TestFunctional/parallel/License (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.1714726s)
--- PASS: TestFunctional/parallel/License (1.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-508115 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-508115 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-gsrmm" [2723acb4-5a5a-4586-8a44-172305c82a20] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-gsrmm" [2723acb4-5a5a-4586-8a44-172305c82a20] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005374385s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "343.700218ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.869063ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "273.15561ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.546753ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
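Note: `profile list -o json` (and the `--light` variant exercised above) emits machine-readable profile data, which is handy for scripting across clusters. A loose Go sketch that only checks the output parses as JSON and pretty-prints it, without assuming a particular schema:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// --light is the faster listing exercised in the timings above.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}

	// Pretty-print without assuming a particular schema for the JSON payload.
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, out, "", "  "); err != nil {
		log.Fatalf("profile list did not return valid JSON: %v", err)
	}
	fmt.Println(pretty.String())
}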

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdany-port791520917/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737977484991218147" to /tmp/TestFunctionalparallelMountCmdany-port791520917/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737977484991218147" to /tmp/TestFunctionalparallelMountCmdany-port791520917/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737977484991218147" to /tmp/TestFunctionalparallelMountCmdany-port791520917/001/test-1737977484991218147
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-508115 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (226.516359ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 11:31:25.218085  478387 retry.go:31] will retry after 707.110742ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 11:31 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 11:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 11:31 test-1737977484991218147
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh cat /mount-9p/test-1737977484991218147
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-508115 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6959bac6-334a-4d76-a86e-63597f3bafda] Pending
helpers_test.go:344: "busybox-mount" [6959bac6-334a-4d76-a86e-63597f3bafda] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6959bac6-334a-4d76-a86e-63597f3bafda] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6959bac6-334a-4d76-a86e-63597f3bafda] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002980108s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-508115 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdany-port791520917/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.70s)
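For reference, a minimal Go sketch of the flow this test follows: start the 9p mount in the background, then poll findmnt inside the guest until the mount appears (the retry logged at 11:31:25 above is exactly that). The host directory /tmp/mount-src and the retry count are placeholders, not values taken from the test; the specific-port variant below only adds --port 46464 to the mount command.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Placeholder host directory; profile name matches this run.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-508115",
		"/tmp/mount-src:/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Poll until the 9p mount is visible inside the guest, as the test's retry loop does.
	for i := 0; i < 10; i++ {
		check := exec.Command("out/minikube-linux-amd64", "-p", "functional-508115",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if check.Run() == nil {
			fmt.Println("/mount-9p is mounted in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never became visible")
}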

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdspecific-port493781001/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-508115 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.781499ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 11:31:33.937391  478387 retry.go:31] will retry after 401.230439ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdspecific-port493781001/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-508115 ssh "sudo umount -f /mount-9p": exit status 1 (320.094417ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-508115 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdspecific-port493781001/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 service list -o json
functional_test.go:1494: Took "305.17505ms" to run "out/minikube-linux-amd64 -p functional-508115 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.167:32356
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.167:32356
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
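The ServiceCmd subtests all resolve the NodePort endpoint for the hello-node service (https://192.168.39.167:32356 in this run). A small Go sketch that fetches the URL and issues one request, assuming `service --url` prints a single URL line and returns, as it does here with the kvm2 driver:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Assumes a single-line URL on stdout, as in the log above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-508115",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, "-", len(body), "bytes from", url)
}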

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3494006469/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3494006469/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3494006469/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-508115 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3494006469/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3494006469/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-508115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3494006469/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.95s)
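VerifyCleanup relies on `mount --kill=true`, which terminates every background mount process for the profile (the command at functional_test_mount_test.go:370 above). A one-call sketch, assuming the same binary path and profile name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --kill=true asks minikube to stop all running mount daemons for this profile.
	out, err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-508115", "--kill=true").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}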

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
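All three UpdateContextCmd subtests run `update-context`, which rewrites the kubeconfig entry for the profile if the VM's IP or port has changed. A Go sketch that runs it and then prints the API server address kubectl now points at; kubectl being on PATH is an assumption of this sketch, not part of the test:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Re-point the kubeconfig entry for this profile at the VM's current address.
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-508115",
		"update-context").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("update-context failed: %v\n%s", err, out))
	}
	// Show the API server URL the current context resolves to.
	server, err := exec.Command("kubectl", "config", "view", "--minify",
		"-o", "jsonpath={.clusters[0].cluster.server}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", string(server))
}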

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-508115 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-508115
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-508115
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-508115 image ls --format short --alsologtostderr:
I0127 11:31:46.770040  487221 out.go:345] Setting OutFile to fd 1 ...
I0127 11:31:46.770138  487221 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:46.770146  487221 out.go:358] Setting ErrFile to fd 2...
I0127 11:31:46.770150  487221 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:46.770332  487221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
I0127 11:31:46.770919  487221 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:46.771023  487221 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:46.771380  487221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:46.771457  487221 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:46.786840  487221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
I0127 11:31:46.787351  487221 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:46.787965  487221 main.go:141] libmachine: Using API Version  1
I0127 11:31:46.787989  487221 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:46.788387  487221 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:46.788599  487221 main.go:141] libmachine: (functional-508115) Calling .GetState
I0127 11:31:46.790651  487221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:46.790696  487221 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:46.805603  487221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44407
I0127 11:31:46.805986  487221 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:46.806487  487221 main.go:141] libmachine: Using API Version  1
I0127 11:31:46.806517  487221 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:46.806841  487221 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:46.807104  487221 main.go:141] libmachine: (functional-508115) Calling .DriverName
I0127 11:31:46.807317  487221 ssh_runner.go:195] Run: systemctl --version
I0127 11:31:46.807366  487221 main.go:141] libmachine: (functional-508115) Calling .GetSSHHostname
I0127 11:31:46.810241  487221 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:46.810669  487221 main.go:141] libmachine: (functional-508115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:66:e2", ip: ""} in network mk-functional-508115: {Iface:virbr1 ExpiryTime:2025-01-27 12:29:03 +0000 UTC Type:0 Mac:52:54:00:47:66:e2 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:functional-508115 Clientid:01:52:54:00:47:66:e2}
I0127 11:31:46.810701  487221 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined IP address 192.168.39.167 and MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:46.810836  487221 main.go:141] libmachine: (functional-508115) Calling .GetSSHPort
I0127 11:31:46.811033  487221 main.go:141] libmachine: (functional-508115) Calling .GetSSHKeyPath
I0127 11:31:46.811192  487221 main.go:141] libmachine: (functional-508115) Calling .GetSSHUsername
I0127 11:31:46.811348  487221 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/functional-508115/id_rsa Username:docker}
I0127 11:31:46.893392  487221 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:31:46.933168  487221 main.go:141] libmachine: Making call to close driver server
I0127 11:31:46.933188  487221 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:46.933531  487221 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:46.933557  487221 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:31:46.933559  487221 main.go:141] libmachine: (functional-508115) DBG | Closing plugin on server side
I0127 11:31:46.933568  487221 main.go:141] libmachine: Making call to close driver server
I0127 11:31:46.933577  487221 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:46.933826  487221 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:46.933845  487221 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-508115 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-508115  | sha256:9056ab | 2.37MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| localhost/my-image                          | functional-508115  | sha256:9619a6 | 775kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:019ee1 | 26.3MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e29f9c | 30.9MB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| docker.io/library/minikube-local-cache-test | functional-508115  | sha256:fc28a9 | 991B   |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| docker.io/library/nginx                     | latest             | sha256:9bea9f | 72.1MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:95c0bd | 28.7MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:2b0d65 | 20.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-508115 image ls --format table --alsologtostderr:
I0127 11:31:52.300078  487389 out.go:345] Setting OutFile to fd 1 ...
I0127 11:31:52.300184  487389 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:52.300192  487389 out.go:358] Setting ErrFile to fd 2...
I0127 11:31:52.300196  487389 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:52.300378  487389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
I0127 11:31:52.300989  487389 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:52.301086  487389 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:52.301454  487389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:52.301525  487389 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:52.316433  487389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
I0127 11:31:52.316940  487389 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:52.317553  487389 main.go:141] libmachine: Using API Version  1
I0127 11:31:52.317580  487389 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:52.317959  487389 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:52.318196  487389 main.go:141] libmachine: (functional-508115) Calling .GetState
I0127 11:31:52.320206  487389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:52.320245  487389 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:52.334548  487389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
I0127 11:31:52.335021  487389 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:52.335591  487389 main.go:141] libmachine: Using API Version  1
I0127 11:31:52.335625  487389 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:52.335974  487389 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:52.336176  487389 main.go:141] libmachine: (functional-508115) Calling .DriverName
I0127 11:31:52.336401  487389 ssh_runner.go:195] Run: systemctl --version
I0127 11:31:52.336431  487389 main.go:141] libmachine: (functional-508115) Calling .GetSSHHostname
I0127 11:31:52.339230  487389 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:52.339567  487389 main.go:141] libmachine: (functional-508115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:66:e2", ip: ""} in network mk-functional-508115: {Iface:virbr1 ExpiryTime:2025-01-27 12:29:03 +0000 UTC Type:0 Mac:52:54:00:47:66:e2 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:functional-508115 Clientid:01:52:54:00:47:66:e2}
I0127 11:31:52.339593  487389 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined IP address 192.168.39.167 and MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:52.339711  487389 main.go:141] libmachine: (functional-508115) Calling .GetSSHPort
I0127 11:31:52.339888  487389 main.go:141] libmachine: (functional-508115) Calling .GetSSHKeyPath
I0127 11:31:52.340044  487389 main.go:141] libmachine: (functional-508115) Calling .GetSSHUsername
I0127 11:31:52.340193  487389 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/functional-508115/id_rsa Username:docker}
I0127 11:31:52.423578  487389 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:31:52.465742  487389 main.go:141] libmachine: Making call to close driver server
I0127 11:31:52.465765  487389 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:52.466052  487389 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:52.466067  487389 main.go:141] libmachine: (functional-508115) DBG | Closing plugin on server side
I0127 11:31:52.466084  487389 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:31:52.466094  487389 main.go:141] libmachine: Making call to close driver server
I0127 11:31:52.466117  487389 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:52.466415  487389 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:52.466436  487389 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:31:52.466441  487389 main.go:141] libmachine: (functional-508115) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-508115 image ls --format json --alsologtostderr:
[{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"26258470"},{"id":"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7a
b1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"72080558"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c
448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"28671624"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:9619a6a8e8e36164b6c5add4d6bef9cb84560e4bf7317f111d461d4d9df5e03b","repoDigests":[],"repoTags":["localhost/my-image:functional-508115"],"size":"774888"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repo
Tags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"30908485"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:fc28a93011f34e2b25faa9c21d877941c61a31e021572d6c89eeb104ee5e78b6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-508115"],"size":"991"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"r
epoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"20657536"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-508115"],"size":"2372971"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-508115 image ls --format json --alsologtostderr:
I0127 11:31:52.084825  487365 out.go:345] Setting OutFile to fd 1 ...
I0127 11:31:52.084919  487365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:52.084927  487365 out.go:358] Setting ErrFile to fd 2...
I0127 11:31:52.084931  487365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:52.085159  487365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
I0127 11:31:52.085948  487365 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:52.086054  487365 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:52.086458  487365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:52.086505  487365 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:52.101832  487365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
I0127 11:31:52.102419  487365 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:52.103007  487365 main.go:141] libmachine: Using API Version  1
I0127 11:31:52.103027  487365 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:52.103373  487365 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:52.103595  487365 main.go:141] libmachine: (functional-508115) Calling .GetState
I0127 11:31:52.105625  487365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:52.105664  487365 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:52.121474  487365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
I0127 11:31:52.121951  487365 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:52.122648  487365 main.go:141] libmachine: Using API Version  1
I0127 11:31:52.122677  487365 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:52.123024  487365 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:52.123211  487365 main.go:141] libmachine: (functional-508115) Calling .DriverName
I0127 11:31:52.123424  487365 ssh_runner.go:195] Run: systemctl --version
I0127 11:31:52.123458  487365 main.go:141] libmachine: (functional-508115) Calling .GetSSHHostname
I0127 11:31:52.126491  487365 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:52.127027  487365 main.go:141] libmachine: (functional-508115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:66:e2", ip: ""} in network mk-functional-508115: {Iface:virbr1 ExpiryTime:2025-01-27 12:29:03 +0000 UTC Type:0 Mac:52:54:00:47:66:e2 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:functional-508115 Clientid:01:52:54:00:47:66:e2}
I0127 11:31:52.127057  487365 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined IP address 192.168.39.167 and MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:52.127219  487365 main.go:141] libmachine: (functional-508115) Calling .GetSSHPort
I0127 11:31:52.127419  487365 main.go:141] libmachine: (functional-508115) Calling .GetSSHKeyPath
I0127 11:31:52.127621  487365 main.go:141] libmachine: (functional-508115) Calling .GetSSHUsername
I0127 11:31:52.127774  487365 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/functional-508115/id_rsa Username:docker}
I0127 11:31:52.202758  487365 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:31:52.241977  487365 main.go:141] libmachine: Making call to close driver server
I0127 11:31:52.241990  487365 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:52.242381  487365 main.go:141] libmachine: (functional-508115) DBG | Closing plugin on server side
I0127 11:31:52.242389  487365 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:52.242418  487365 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:31:52.242432  487365 main.go:141] libmachine: Making call to close driver server
I0127 11:31:52.242442  487365 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:52.242757  487365 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:52.242782  487365 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:31:52.242787  487365 main.go:141] libmachine: (functional-508115) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
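The JSON emitted by `image ls --format json` above is an array of objects with id, repoDigests, repoTags, and a string size field. A small Go sketch decoding it into a typed slice, assuming the same binary path and profile as this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-508115",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
	}
}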

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-508115 image ls --format yaml --alsologtostderr:
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "72080558"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "26258470"
- id: sha256:fc28a93011f34e2b25faa9c21d877941c61a31e021572d6c89eeb104ee5e78b6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-508115
size: "991"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "28671624"
- id: sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "30908485"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-508115
size: "2372971"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "20657536"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-508115 image ls --format yaml --alsologtostderr:
I0127 11:31:46.997411  487245 out.go:345] Setting OutFile to fd 1 ...
I0127 11:31:46.997542  487245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:46.997554  487245 out.go:358] Setting ErrFile to fd 2...
I0127 11:31:46.997561  487245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:46.997734  487245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
I0127 11:31:46.998332  487245 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:46.998441  487245 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:46.998882  487245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:46.998950  487245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:47.013717  487245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
I0127 11:31:47.014158  487245 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:47.014705  487245 main.go:141] libmachine: Using API Version  1
I0127 11:31:47.014729  487245 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:47.015091  487245 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:47.015298  487245 main.go:141] libmachine: (functional-508115) Calling .GetState
I0127 11:31:47.017072  487245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:47.017112  487245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:47.031002  487245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
I0127 11:31:47.031463  487245 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:47.031941  487245 main.go:141] libmachine: Using API Version  1
I0127 11:31:47.031963  487245 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:47.032327  487245 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:47.032531  487245 main.go:141] libmachine: (functional-508115) Calling .DriverName
I0127 11:31:47.032766  487245 ssh_runner.go:195] Run: systemctl --version
I0127 11:31:47.032796  487245 main.go:141] libmachine: (functional-508115) Calling .GetSSHHostname
I0127 11:31:47.035473  487245 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:47.035859  487245 main.go:141] libmachine: (functional-508115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:66:e2", ip: ""} in network mk-functional-508115: {Iface:virbr1 ExpiryTime:2025-01-27 12:29:03 +0000 UTC Type:0 Mac:52:54:00:47:66:e2 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:functional-508115 Clientid:01:52:54:00:47:66:e2}
I0127 11:31:47.035878  487245 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined IP address 192.168.39.167 and MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:47.036054  487245 main.go:141] libmachine: (functional-508115) Calling .GetSSHPort
I0127 11:31:47.036211  487245 main.go:141] libmachine: (functional-508115) Calling .GetSSHKeyPath
I0127 11:31:47.036391  487245 main.go:141] libmachine: (functional-508115) Calling .GetSSHUsername
I0127 11:31:47.036539  487245 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/functional-508115/id_rsa Username:docker}
I0127 11:31:47.112016  487245 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:31:47.147089  487245 main.go:141] libmachine: Making call to close driver server
I0127 11:31:47.147106  487245 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:47.147376  487245 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:47.147393  487245 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:31:47.147433  487245 main.go:141] libmachine: Making call to close driver server
I0127 11:31:47.147441  487245 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:47.147450  487245 main.go:141] libmachine: (functional-508115) DBG | Closing plugin on server side
I0127 11:31:47.147661  487245 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:47.147707  487245 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:31:47.147682  487245 main.go:141] libmachine: (functional-508115) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-508115 ssh pgrep buildkitd: exit status 1 (184.096033ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image build -t localhost/my-image:functional-508115 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 image build -t localhost/my-image:functional-508115 testdata/build --alsologtostderr: (4.466931004s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-508115 image build -t localhost/my-image:functional-508115 testdata/build --alsologtostderr:
I0127 11:31:47.383310  487299 out.go:345] Setting OutFile to fd 1 ...
I0127 11:31:47.383560  487299 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:47.383569  487299 out.go:358] Setting ErrFile to fd 2...
I0127 11:31:47.383573  487299 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:31:47.383756  487299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
I0127 11:31:47.384278  487299 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:47.384804  487299 config.go:182] Loaded profile config "functional-508115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:31:47.385223  487299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:47.385266  487299 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:47.400382  487299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42871
I0127 11:31:47.400845  487299 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:47.401463  487299 main.go:141] libmachine: Using API Version  1
I0127 11:31:47.401824  487299 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:47.402235  487299 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:47.402591  487299 main.go:141] libmachine: (functional-508115) Calling .GetState
I0127 11:31:47.404396  487299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:31:47.404432  487299 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:31:47.418982  487299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
I0127 11:31:47.419375  487299 main.go:141] libmachine: () Calling .GetVersion
I0127 11:31:47.419842  487299 main.go:141] libmachine: Using API Version  1
I0127 11:31:47.419868  487299 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:31:47.420217  487299 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:31:47.420502  487299 main.go:141] libmachine: (functional-508115) Calling .DriverName
I0127 11:31:47.420660  487299 ssh_runner.go:195] Run: systemctl --version
I0127 11:31:47.420683  487299 main.go:141] libmachine: (functional-508115) Calling .GetSSHHostname
I0127 11:31:47.423621  487299 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:47.424042  487299 main.go:141] libmachine: (functional-508115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:66:e2", ip: ""} in network mk-functional-508115: {Iface:virbr1 ExpiryTime:2025-01-27 12:29:03 +0000 UTC Type:0 Mac:52:54:00:47:66:e2 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:functional-508115 Clientid:01:52:54:00:47:66:e2}
I0127 11:31:47.424063  487299 main.go:141] libmachine: (functional-508115) DBG | domain functional-508115 has defined IP address 192.168.39.167 and MAC address 52:54:00:47:66:e2 in network mk-functional-508115
I0127 11:31:47.424236  487299 main.go:141] libmachine: (functional-508115) Calling .GetSSHPort
I0127 11:31:47.424417  487299 main.go:141] libmachine: (functional-508115) Calling .GetSSHKeyPath
I0127 11:31:47.424581  487299 main.go:141] libmachine: (functional-508115) Calling .GetSSHUsername
I0127 11:31:47.424754  487299 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/functional-508115/id_rsa Username:docker}
I0127 11:31:47.502435  487299 build_images.go:161] Building image from path: /tmp/build.188795666.tar
I0127 11:31:47.502495  487299 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 11:31:47.512180  487299 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.188795666.tar
I0127 11:31:47.515869  487299 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.188795666.tar: stat -c "%s %y" /var/lib/minikube/build/build.188795666.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.188795666.tar': No such file or directory
I0127 11:31:47.515895  487299 ssh_runner.go:362] scp /tmp/build.188795666.tar --> /var/lib/minikube/build/build.188795666.tar (3072 bytes)
I0127 11:31:47.539789  487299 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.188795666
I0127 11:31:47.548379  487299 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.188795666 -xf /var/lib/minikube/build/build.188795666.tar
I0127 11:31:47.557031  487299 containerd.go:394] Building image: /var/lib/minikube/build/build.188795666
I0127 11:31:47.557082  487299 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.188795666 --local dockerfile=/var/lib/minikube/build/build.188795666 --output type=image,name=localhost/my-image:functional-508115
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:8f270a111f10016e6390f8a87400dd0f92a5cd3e06aa169bfe93d34744d86569
#8 exporting manifest sha256:8f270a111f10016e6390f8a87400dd0f92a5cd3e06aa169bfe93d34744d86569 0.0s done
#8 exporting config sha256:9619a6a8e8e36164b6c5add4d6bef9cb84560e4bf7317f111d461d4d9df5e03b 0.0s done
#8 naming to localhost/my-image:functional-508115 0.0s done
#8 DONE 0.4s
I0127 11:31:51.753622  487299 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.188795666 --local dockerfile=/var/lib/minikube/build/build.188795666 --output type=image,name=localhost/my-image:functional-508115: (4.196506832s)
I0127 11:31:51.753727  487299 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.188795666
I0127 11:31:51.774330  487299 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.188795666.tar
I0127 11:31:51.799515  487299 build_images.go:217] Built localhost/my-image:functional-508115 from /tmp/build.188795666.tar
I0127 11:31:51.799548  487299 build_images.go:133] succeeded building to: functional-508115
I0127 11:31:51.799553  487299 build_images.go:134] failed building to: 
I0127 11:31:51.799579  487299 main.go:141] libmachine: Making call to close driver server
I0127 11:31:51.799591  487299 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:51.799871  487299 main.go:141] libmachine: (functional-508115) DBG | Closing plugin on server side
I0127 11:31:51.799920  487299 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:51.799928  487299 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:31:51.799940  487299 main.go:141] libmachine: Making call to close driver server
I0127 11:31:51.799950  487299 main.go:141] libmachine: (functional-508115) Calling .Close
I0127 11:31:51.800232  487299 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:31:51.800250  487299 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:31:51.800292  487299 main.go:141] libmachine: (functional-508115) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.88s)
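Note: the buildctl steps logged above (#1-#8) suggest the 97-byte Dockerfile transferred in step #1 is roughly of the following shape. This is a reconstruction for readability, not the test's actual testdata, and the file contents and manual reproduction below are illustrative only:

    # Reconstructed from build steps #5-#7 above; the real testdata may differ
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    echo test > content.txt   # stand-in for the 62B build context logged in step #4
    out/minikube-linux-amd64 -p functional-508115 image build -t localhost/my-image:functional-508115 .
    out/minikube-linux-amd64 -p functional-508115 image ls   # the new tag should be listed, as in the image ls above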

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.798342798s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-508115
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image load --daemon kicbase/echo-server:functional-508115 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 image load --daemon kicbase/echo-server:functional-508115 --alsologtostderr: (1.539878459s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image load --daemon kicbase/echo-server:functional-508115 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-508115
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image load --daemon kicbase/echo-server:functional-508115 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-508115 image load --daemon kicbase/echo-server:functional-508115 --alsologtostderr: (1.229382191s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image save kicbase/echo-server:functional-508115 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image rm kicbase/echo-server:functional-508115 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)
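The ImageSaveToFile, ImageRemove, and ImageLoadFromFile tests above exercise a tarball round trip of the same image. A minimal manual equivalent against the same profile (the /tmp path here is illustrative; the run above uses the Jenkins workspace path):

    out/minikube-linux-amd64 -p functional-508115 image save kicbase/echo-server:functional-508115 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-508115 image rm kicbase/echo-server:functional-508115
    out/minikube-linux-amd64 -p functional-508115 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-508115 image ls   # the tag should be listed again after the load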

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-508115
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 image save --daemon kicbase/echo-server:functional-508115 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-508115
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-508115 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-508115
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-508115
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-508115
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (187.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-899621 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 11:34:10.934150  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:34:38.637019  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-899621 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m7.17853187s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (187.86s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-899621 -- rollout status deployment/busybox: (3.553343188s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-9vrr8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-fx75g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-g7vbm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-9vrr8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-fx75g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-g7vbm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-9vrr8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-fx75g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-g7vbm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.62s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-9vrr8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-9vrr8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-fx75g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-fx75g -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-g7vbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-899621 -- exec busybox-58667487b6-g7vbm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-899621 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-899621 -v=7 --alsologtostderr: (55.097377729s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.95s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-899621 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp testdata/cp-test.txt ha-899621:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3012322772/001/cp-test_ha-899621.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621:/home/docker/cp-test.txt ha-899621-m02:/home/docker/cp-test_ha-899621_ha-899621-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m02 "sudo cat /home/docker/cp-test_ha-899621_ha-899621-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621:/home/docker/cp-test.txt ha-899621-m03:/home/docker/cp-test_ha-899621_ha-899621-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m03 "sudo cat /home/docker/cp-test_ha-899621_ha-899621-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621:/home/docker/cp-test.txt ha-899621-m04:/home/docker/cp-test_ha-899621_ha-899621-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m04 "sudo cat /home/docker/cp-test_ha-899621_ha-899621-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp testdata/cp-test.txt ha-899621-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3012322772/001/cp-test_ha-899621-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m02:/home/docker/cp-test.txt ha-899621:/home/docker/cp-test_ha-899621-m02_ha-899621.txt
E0127 11:36:23.465293  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:36:23.472098  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:36:23.484394  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:36:23.505821  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:36:23.547221  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:36:23.628658  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m02 "sudo cat /home/docker/cp-test.txt"
E0127 11:36:23.789985  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621 "sudo cat /home/docker/cp-test_ha-899621-m02_ha-899621.txt"
E0127 11:36:24.112298  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m02:/home/docker/cp-test.txt ha-899621-m03:/home/docker/cp-test_ha-899621-m02_ha-899621-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m03 "sudo cat /home/docker/cp-test_ha-899621-m02_ha-899621-m03.txt"
E0127 11:36:24.754039  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m02:/home/docker/cp-test.txt ha-899621-m04:/home/docker/cp-test_ha-899621-m02_ha-899621-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m04 "sudo cat /home/docker/cp-test_ha-899621-m02_ha-899621-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp testdata/cp-test.txt ha-899621-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3012322772/001/cp-test_ha-899621-m03.txt
E0127 11:36:26.036005  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m03:/home/docker/cp-test.txt ha-899621:/home/docker/cp-test_ha-899621-m03_ha-899621.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621 "sudo cat /home/docker/cp-test_ha-899621-m03_ha-899621.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m03:/home/docker/cp-test.txt ha-899621-m02:/home/docker/cp-test_ha-899621-m03_ha-899621-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m02 "sudo cat /home/docker/cp-test_ha-899621-m03_ha-899621-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m03:/home/docker/cp-test.txt ha-899621-m04:/home/docker/cp-test_ha-899621-m03_ha-899621-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m04 "sudo cat /home/docker/cp-test_ha-899621-m03_ha-899621-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp testdata/cp-test.txt ha-899621-m04:/home/docker/cp-test.txt
E0127 11:36:28.597486  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3012322772/001/cp-test_ha-899621-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m04:/home/docker/cp-test.txt ha-899621:/home/docker/cp-test_ha-899621-m04_ha-899621.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621 "sudo cat /home/docker/cp-test_ha-899621-m04_ha-899621.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m04:/home/docker/cp-test.txt ha-899621-m02:/home/docker/cp-test_ha-899621-m04_ha-899621-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m02 "sudo cat /home/docker/cp-test_ha-899621-m04_ha-899621-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m04:/home/docker/cp-test.txt ha-899621-m03:/home/docker/cp-test_ha-899621-m04_ha-899621-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m03 "sudo cat /home/docker/cp-test_ha-899621-m04_ha-899621-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.78s)
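CopyFile cycles a test file host-to-node, node-to-host, and node-to-node for every node pair, verifying each copy over SSH. A minimal sketch of one such cycle (destination file names are illustrative):

    out/minikube-linux-amd64 -p ha-899621 cp testdata/cp-test.txt ha-899621-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m02 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-899621 cp ha-899621-m02:/home/docker/cp-test.txt ha-899621-m03:/home/docker/cp-test_m02.txt
    out/minikube-linux-amd64 -p ha-899621 ssh -n ha-899621-m03 "sudo cat /home/docker/cp-test_m02.txt"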

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 node stop m02 -v=7 --alsologtostderr
E0127 11:36:33.719304  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:36:43.961626  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:37:04.443052  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:37:45.405224  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-899621 node stop m02 -v=7 --alsologtostderr: (1m30.98652239s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr: exit status 7 (651.291807ms)

                                                
                                                
-- stdout --
	ha-899621
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-899621-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-899621-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-899621-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:38:02.568770  492003 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:38:02.568900  492003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:38:02.568911  492003 out.go:358] Setting ErrFile to fd 2...
	I0127 11:38:02.568918  492003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:38:02.569136  492003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 11:38:02.569383  492003 out.go:352] Setting JSON to false
	I0127 11:38:02.569419  492003 mustload.go:65] Loading cluster: ha-899621
	I0127 11:38:02.569525  492003 notify.go:220] Checking for updates...
	I0127 11:38:02.569896  492003 config.go:182] Loaded profile config "ha-899621": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:38:02.569928  492003 status.go:174] checking status of ha-899621 ...
	I0127 11:38:02.570358  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:02.570412  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:02.593139  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0127 11:38:02.593729  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:02.594383  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:02.594413  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:02.594833  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:02.595019  492003 main.go:141] libmachine: (ha-899621) Calling .GetState
	I0127 11:38:02.596896  492003 status.go:371] ha-899621 host status = "Running" (err=<nil>)
	I0127 11:38:02.596916  492003 host.go:66] Checking if "ha-899621" exists ...
	I0127 11:38:02.597252  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:02.597321  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:02.615142  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37557
	I0127 11:38:02.615606  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:02.616169  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:02.616201  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:02.616525  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:02.616714  492003 main.go:141] libmachine: (ha-899621) Calling .GetIP
	I0127 11:38:02.619293  492003 main.go:141] libmachine: (ha-899621) DBG | domain ha-899621 has defined MAC address 52:54:00:68:00:39 in network mk-ha-899621
	I0127 11:38:02.619653  492003 main.go:141] libmachine: (ha-899621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:00:39", ip: ""} in network mk-ha-899621: {Iface:virbr1 ExpiryTime:2025-01-27 12:32:21 +0000 UTC Type:0 Mac:52:54:00:68:00:39 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-899621 Clientid:01:52:54:00:68:00:39}
	I0127 11:38:02.619684  492003 main.go:141] libmachine: (ha-899621) DBG | domain ha-899621 has defined IP address 192.168.39.193 and MAC address 52:54:00:68:00:39 in network mk-ha-899621
	I0127 11:38:02.619802  492003 host.go:66] Checking if "ha-899621" exists ...
	I0127 11:38:02.620135  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:02.620180  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:02.636399  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0127 11:38:02.636901  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:02.637388  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:02.637418  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:02.637791  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:02.638007  492003 main.go:141] libmachine: (ha-899621) Calling .DriverName
	I0127 11:38:02.638175  492003 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:38:02.638196  492003 main.go:141] libmachine: (ha-899621) Calling .GetSSHHostname
	I0127 11:38:02.641691  492003 main.go:141] libmachine: (ha-899621) DBG | domain ha-899621 has defined MAC address 52:54:00:68:00:39 in network mk-ha-899621
	I0127 11:38:02.642234  492003 main.go:141] libmachine: (ha-899621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:00:39", ip: ""} in network mk-ha-899621: {Iface:virbr1 ExpiryTime:2025-01-27 12:32:21 +0000 UTC Type:0 Mac:52:54:00:68:00:39 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-899621 Clientid:01:52:54:00:68:00:39}
	I0127 11:38:02.642273  492003 main.go:141] libmachine: (ha-899621) DBG | domain ha-899621 has defined IP address 192.168.39.193 and MAC address 52:54:00:68:00:39 in network mk-ha-899621
	I0127 11:38:02.642406  492003 main.go:141] libmachine: (ha-899621) Calling .GetSSHPort
	I0127 11:38:02.642589  492003 main.go:141] libmachine: (ha-899621) Calling .GetSSHKeyPath
	I0127 11:38:02.642757  492003 main.go:141] libmachine: (ha-899621) Calling .GetSSHUsername
	I0127 11:38:02.642909  492003 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/ha-899621/id_rsa Username:docker}
	I0127 11:38:02.729099  492003 ssh_runner.go:195] Run: systemctl --version
	I0127 11:38:02.735645  492003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:38:02.751019  492003 kubeconfig.go:125] found "ha-899621" server: "https://192.168.39.254:8443"
	I0127 11:38:02.751057  492003 api_server.go:166] Checking apiserver status ...
	I0127 11:38:02.751093  492003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:38:02.767072  492003 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup
	W0127 11:38:02.776542  492003 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:38:02.776615  492003 ssh_runner.go:195] Run: ls
	I0127 11:38:02.780796  492003 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 11:38:02.786069  492003 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 11:38:02.786091  492003 status.go:463] ha-899621 apiserver status = Running (err=<nil>)
	I0127 11:38:02.786117  492003 status.go:176] ha-899621 status: &{Name:ha-899621 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:38:02.786144  492003 status.go:174] checking status of ha-899621-m02 ...
	I0127 11:38:02.786416  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:02.786452  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:02.802138  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0127 11:38:02.802644  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:02.803185  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:02.803211  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:02.803589  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:02.803792  492003 main.go:141] libmachine: (ha-899621-m02) Calling .GetState
	I0127 11:38:02.805466  492003 status.go:371] ha-899621-m02 host status = "Stopped" (err=<nil>)
	I0127 11:38:02.805481  492003 status.go:384] host is not running, skipping remaining checks
	I0127 11:38:02.805487  492003 status.go:176] ha-899621-m02 status: &{Name:ha-899621-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:38:02.805503  492003 status.go:174] checking status of ha-899621-m03 ...
	I0127 11:38:02.805769  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:02.805810  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:02.822233  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I0127 11:38:02.822660  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:02.823210  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:02.823234  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:02.823560  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:02.823785  492003 main.go:141] libmachine: (ha-899621-m03) Calling .GetState
	I0127 11:38:02.825255  492003 status.go:371] ha-899621-m03 host status = "Running" (err=<nil>)
	I0127 11:38:02.825273  492003 host.go:66] Checking if "ha-899621-m03" exists ...
	I0127 11:38:02.825566  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:02.825612  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:02.840925  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0127 11:38:02.841342  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:02.841822  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:02.841844  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:02.842162  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:02.842344  492003 main.go:141] libmachine: (ha-899621-m03) Calling .GetIP
	I0127 11:38:02.845205  492003 main.go:141] libmachine: (ha-899621-m03) DBG | domain ha-899621-m03 has defined MAC address 52:54:00:32:78:38 in network mk-ha-899621
	I0127 11:38:02.845728  492003 main.go:141] libmachine: (ha-899621-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:78:38", ip: ""} in network mk-ha-899621: {Iface:virbr1 ExpiryTime:2025-01-27 12:34:14 +0000 UTC Type:0 Mac:52:54:00:32:78:38 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-899621-m03 Clientid:01:52:54:00:32:78:38}
	I0127 11:38:02.845760  492003 main.go:141] libmachine: (ha-899621-m03) DBG | domain ha-899621-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:32:78:38 in network mk-ha-899621
	I0127 11:38:02.846012  492003 host.go:66] Checking if "ha-899621-m03" exists ...
	I0127 11:38:02.846425  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:02.846473  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:02.861803  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39009
	I0127 11:38:02.862214  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:02.862653  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:02.862672  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:02.862969  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:02.863176  492003 main.go:141] libmachine: (ha-899621-m03) Calling .DriverName
	I0127 11:38:02.863333  492003 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:38:02.863350  492003 main.go:141] libmachine: (ha-899621-m03) Calling .GetSSHHostname
	I0127 11:38:02.865991  492003 main.go:141] libmachine: (ha-899621-m03) DBG | domain ha-899621-m03 has defined MAC address 52:54:00:32:78:38 in network mk-ha-899621
	I0127 11:38:02.866418  492003 main.go:141] libmachine: (ha-899621-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:78:38", ip: ""} in network mk-ha-899621: {Iface:virbr1 ExpiryTime:2025-01-27 12:34:14 +0000 UTC Type:0 Mac:52:54:00:32:78:38 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-899621-m03 Clientid:01:52:54:00:32:78:38}
	I0127 11:38:02.866449  492003 main.go:141] libmachine: (ha-899621-m03) DBG | domain ha-899621-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:32:78:38 in network mk-ha-899621
	I0127 11:38:02.866590  492003 main.go:141] libmachine: (ha-899621-m03) Calling .GetSSHPort
	I0127 11:38:02.866747  492003 main.go:141] libmachine: (ha-899621-m03) Calling .GetSSHKeyPath
	I0127 11:38:02.866881  492003 main.go:141] libmachine: (ha-899621-m03) Calling .GetSSHUsername
	I0127 11:38:02.867002  492003 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/ha-899621-m03/id_rsa Username:docker}
	I0127 11:38:02.956644  492003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:38:02.976555  492003 kubeconfig.go:125] found "ha-899621" server: "https://192.168.39.254:8443"
	I0127 11:38:02.976581  492003 api_server.go:166] Checking apiserver status ...
	I0127 11:38:02.976609  492003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:38:02.991499  492003 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0127 11:38:03.000606  492003 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:38:03.000653  492003 ssh_runner.go:195] Run: ls
	I0127 11:38:03.004643  492003 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 11:38:03.009579  492003 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 11:38:03.009613  492003 status.go:463] ha-899621-m03 apiserver status = Running (err=<nil>)
	I0127 11:38:03.009635  492003 status.go:176] ha-899621-m03 status: &{Name:ha-899621-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:38:03.009658  492003 status.go:174] checking status of ha-899621-m04 ...
	I0127 11:38:03.010177  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:03.010221  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:03.027028  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41189
	I0127 11:38:03.027470  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:03.028002  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:03.028036  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:03.028486  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:03.028751  492003 main.go:141] libmachine: (ha-899621-m04) Calling .GetState
	I0127 11:38:03.030320  492003 status.go:371] ha-899621-m04 host status = "Running" (err=<nil>)
	I0127 11:38:03.030338  492003 host.go:66] Checking if "ha-899621-m04" exists ...
	I0127 11:38:03.030655  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:03.030698  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:03.046294  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39075
	I0127 11:38:03.046696  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:03.047193  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:03.047218  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:03.047510  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:03.047740  492003 main.go:141] libmachine: (ha-899621-m04) Calling .GetIP
	I0127 11:38:03.050541  492003 main.go:141] libmachine: (ha-899621-m04) DBG | domain ha-899621-m04 has defined MAC address 52:54:00:c9:88:72 in network mk-ha-899621
	I0127 11:38:03.051011  492003 main.go:141] libmachine: (ha-899621-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:88:72", ip: ""} in network mk-ha-899621: {Iface:virbr1 ExpiryTime:2025-01-27 12:35:37 +0000 UTC Type:0 Mac:52:54:00:c9:88:72 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-899621-m04 Clientid:01:52:54:00:c9:88:72}
	I0127 11:38:03.051055  492003 main.go:141] libmachine: (ha-899621-m04) DBG | domain ha-899621-m04 has defined IP address 192.168.39.95 and MAC address 52:54:00:c9:88:72 in network mk-ha-899621
	I0127 11:38:03.051219  492003 host.go:66] Checking if "ha-899621-m04" exists ...
	I0127 11:38:03.051606  492003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:38:03.051654  492003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:03.065946  492003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35825
	I0127 11:38:03.066401  492003 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:03.066974  492003 main.go:141] libmachine: Using API Version  1
	I0127 11:38:03.066996  492003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:03.067296  492003 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:03.067497  492003 main.go:141] libmachine: (ha-899621-m04) Calling .DriverName
	I0127 11:38:03.067684  492003 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:38:03.067709  492003 main.go:141] libmachine: (ha-899621-m04) Calling .GetSSHHostname
	I0127 11:38:03.070188  492003 main.go:141] libmachine: (ha-899621-m04) DBG | domain ha-899621-m04 has defined MAC address 52:54:00:c9:88:72 in network mk-ha-899621
	I0127 11:38:03.070552  492003 main.go:141] libmachine: (ha-899621-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:88:72", ip: ""} in network mk-ha-899621: {Iface:virbr1 ExpiryTime:2025-01-27 12:35:37 +0000 UTC Type:0 Mac:52:54:00:c9:88:72 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-899621-m04 Clientid:01:52:54:00:c9:88:72}
	I0127 11:38:03.070579  492003 main.go:141] libmachine: (ha-899621-m04) DBG | domain ha-899621-m04 has defined IP address 192.168.39.95 and MAC address 52:54:00:c9:88:72 in network mk-ha-899621
	I0127 11:38:03.070755  492003 main.go:141] libmachine: (ha-899621-m04) Calling .GetSSHPort
	I0127 11:38:03.070944  492003 main.go:141] libmachine: (ha-899621-m04) Calling .GetSSHKeyPath
	I0127 11:38:03.071113  492003 main.go:141] libmachine: (ha-899621-m04) Calling .GetSSHUsername
	I0127 11:38:03.071263  492003 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/ha-899621-m04/id_rsa Username:docker}
	I0127 11:38:03.152080  492003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:38:03.166924  492003 status.go:176] ha-899621-m04 status: &{Name:ha-899621-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.64s)
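StopSecondaryNode stops one control-plane node and then checks per-node status; the non-zero exit from the status command (exit status 7 in the run above) accompanies the Stopped m02 node rather than an execution error, as the stderr above shows only normal status checks. A minimal manual equivalent:

    out/minikube-linux-amd64 -p ha-899621 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr
    # expect m02 to report host/kubelet/apiserver/kubeconfig as Stopped, and status to exit non-zero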

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (41.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-899621 node start m02 -v=7 --alsologtostderr: (40.271360548s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (463.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-899621 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-899621 -v=7 --alsologtostderr
E0127 11:39:07.326571  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:39:10.933730  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:41:23.465031  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:41:51.168347  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-899621 -v=7 --alsologtostderr: (4m34.119603499s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-899621 --wait=true -v=7 --alsologtostderr
E0127 11:44:10.934092  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:33.998600  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:46:23.465256  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-899621 --wait=true -v=7 --alsologtostderr: (3m9.479484949s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-899621
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (463.70s)
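RestartClusterKeepsNodes records the node list, stops and restarts the entire cluster, and then verifies the node list is unchanged. The command sequence, as run above:

    out/minikube-linux-amd64 node list -p ha-899621 -v=7 --alsologtostderr
    out/minikube-linux-amd64 stop -p ha-899621 -v=7 --alsologtostderr
    out/minikube-linux-amd64 start -p ha-899621 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-amd64 node list -p ha-899621   # should match the node list recorded before the stop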

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (6.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-899621 node delete m03 -v=7 --alsologtostderr: (5.922842845s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.66s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 stop -v=7 --alsologtostderr
E0127 11:49:10.934634  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-899621 stop -v=7 --alsologtostderr: (4m32.794317739s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr: exit status 7 (112.942757ms)

                                                
                                                
-- stdout --
	ha-899621
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-899621-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-899621-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:51:09.736377  495920 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:51:09.736491  495920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:51:09.736500  495920 out.go:358] Setting ErrFile to fd 2...
	I0127 11:51:09.736504  495920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:51:09.736650  495920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 11:51:09.736857  495920 out.go:352] Setting JSON to false
	I0127 11:51:09.736886  495920 mustload.go:65] Loading cluster: ha-899621
	I0127 11:51:09.737024  495920 notify.go:220] Checking for updates...
	I0127 11:51:09.737336  495920 config.go:182] Loaded profile config "ha-899621": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:51:09.737368  495920 status.go:174] checking status of ha-899621 ...
	I0127 11:51:09.737878  495920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:51:09.737930  495920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:51:09.760083  495920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
	I0127 11:51:09.760522  495920 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:51:09.761120  495920 main.go:141] libmachine: Using API Version  1
	I0127 11:51:09.761147  495920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:51:09.761528  495920 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:51:09.761751  495920 main.go:141] libmachine: (ha-899621) Calling .GetState
	I0127 11:51:09.763218  495920 status.go:371] ha-899621 host status = "Stopped" (err=<nil>)
	I0127 11:51:09.763236  495920 status.go:384] host is not running, skipping remaining checks
	I0127 11:51:09.763243  495920 status.go:176] ha-899621 status: &{Name:ha-899621 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:51:09.763277  495920 status.go:174] checking status of ha-899621-m02 ...
	I0127 11:51:09.763556  495920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:51:09.763590  495920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:51:09.777884  495920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37905
	I0127 11:51:09.778320  495920 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:51:09.778815  495920 main.go:141] libmachine: Using API Version  1
	I0127 11:51:09.778847  495920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:51:09.779175  495920 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:51:09.779374  495920 main.go:141] libmachine: (ha-899621-m02) Calling .GetState
	I0127 11:51:09.780859  495920 status.go:371] ha-899621-m02 host status = "Stopped" (err=<nil>)
	I0127 11:51:09.780876  495920 status.go:384] host is not running, skipping remaining checks
	I0127 11:51:09.780883  495920 status.go:176] ha-899621-m02 status: &{Name:ha-899621-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:51:09.780903  495920 status.go:174] checking status of ha-899621-m04 ...
	I0127 11:51:09.781297  495920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 11:51:09.781350  495920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:51:09.795950  495920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I0127 11:51:09.796331  495920 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:51:09.796861  495920 main.go:141] libmachine: Using API Version  1
	I0127 11:51:09.796878  495920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:51:09.797316  495920 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:51:09.797555  495920 main.go:141] libmachine: (ha-899621-m04) Calling .GetState
	I0127 11:51:09.799000  495920 status.go:371] ha-899621-m04 host status = "Stopped" (err=<nil>)
	I0127 11:51:09.799014  495920 status.go:384] host is not running, skipping remaining checks
	I0127 11:51:09.799019  495920 status.go:176] ha-899621-m04 status: &{Name:ha-899621-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (124.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-899621 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 11:51:23.465471  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:52:46.530464  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-899621 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m4.002403353s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (124.73s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (73.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-899621 --control-plane -v=7 --alsologtostderr
E0127 11:54:10.934359  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-899621 --control-plane -v=7 --alsologtostderr: (1m12.411260553s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-899621 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.27s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (55.11s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-423663 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-423663 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (55.109726187s)
--- PASS: TestJSONOutput/start/Command (55.11s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-423663 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-423663 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.45s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-423663 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-423663 --output=json --user=testUser: (6.449053345s)
--- PASS: TestJSONOutput/stop/Command (6.45s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-915264 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-915264 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.973254ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6fb546a3-7377-47f5-b00e-bcef391cfa36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-915264] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1844d0d4-15c0-452e-97cc-4ca70a0c0bb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20318"}}
	{"specversion":"1.0","id":"fed7b196-f093-4c1b-8f63-ec0c8ea95310","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f772fd9-8518-4c3b-a7c6-f9c9250a039b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig"}}
	{"specversion":"1.0","id":"a8b6b77b-af7f-4165-88aa-907e3b47f507","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube"}}
	{"specversion":"1.0","id":"f0a4bb77-1398-494f-bdad-7167eb115f36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"688769e1-ddac-49b2-a5c2-37865ccdc8c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b62e8c3c-296f-437f-91d7-7e45554bbe97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-915264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-915264
--- PASS: TestErrorJSONOutput (0.20s)
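The --output=json runs above emit one JSON object per line (specversion, id, source, type, data), and the unsupported "fail" driver surfaces as an io.k8s.sigs.minikube.error event carrying exit code 56. A minimal Go sketch of consuming such a stream, assuming only the field names visible in the stdout block above (the struct is illustrative and is not a type from the minikube code base):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the JSON lines shown above; defined here for illustration only.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			// Error events (e.g. DRV_UNSUPPORTED_OS above) carry name and exitcode in data.
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}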

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (86.05s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-054679 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-054679 --driver=kvm2  --container-runtime=containerd: (39.714783524s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-137749 --driver=kvm2  --container-runtime=containerd
E0127 11:56:23.466639  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-137749 --driver=kvm2  --container-runtime=containerd: (43.435845314s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-054679
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-137749
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-137749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-137749
helpers_test.go:175: Cleaning up "first-054679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-054679
--- PASS: TestMinikubeProfile (86.05s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-635011 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-635011 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.876549502s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.88s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-635011 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-635011 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.84s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-651666 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-651666 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.839446399s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.84s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-651666 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-651666 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-635011 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-651666 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-651666 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-651666
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-651666: (1.286922554s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.67s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-651666
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-651666: (22.670593979s)
--- PASS: TestMountStart/serial/RestartStopped (23.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-651666 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-651666 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (108.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-594983 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 11:59:10.934298  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-594983 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m48.033281081s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.44s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-594983 -- rollout status deployment/busybox: (3.479444103s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-hrhlk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-st8r9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-hrhlk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-st8r9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-hrhlk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-st8r9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.95s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-hrhlk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-hrhlk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-st8r9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-594983 -- exec busybox-58667487b6-st8r9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (52.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-594983 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-594983 -v 3 --alsologtostderr: (51.526917929s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.10s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-594983 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp testdata/cp-test.txt multinode-594983:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp multinode-594983:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile479543764/001/cp-test_multinode-594983.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp multinode-594983:/home/docker/cp-test.txt multinode-594983-m02:/home/docker/cp-test_multinode-594983_multinode-594983-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m02 "sudo cat /home/docker/cp-test_multinode-594983_multinode-594983-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp multinode-594983:/home/docker/cp-test.txt multinode-594983-m03:/home/docker/cp-test_multinode-594983_multinode-594983-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m03 "sudo cat /home/docker/cp-test_multinode-594983_multinode-594983-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp testdata/cp-test.txt multinode-594983-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp multinode-594983-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile479543764/001/cp-test_multinode-594983-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp multinode-594983-m02:/home/docker/cp-test.txt multinode-594983:/home/docker/cp-test_multinode-594983-m02_multinode-594983.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983 "sudo cat /home/docker/cp-test_multinode-594983-m02_multinode-594983.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp multinode-594983-m02:/home/docker/cp-test.txt multinode-594983-m03:/home/docker/cp-test_multinode-594983-m02_multinode-594983-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m03 "sudo cat /home/docker/cp-test_multinode-594983-m02_multinode-594983-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp testdata/cp-test.txt multinode-594983-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp multinode-594983-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile479543764/001/cp-test_multinode-594983-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp multinode-594983-m03:/home/docker/cp-test.txt multinode-594983:/home/docker/cp-test_multinode-594983-m03_multinode-594983.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983 "sudo cat /home/docker/cp-test_multinode-594983-m03_multinode-594983.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 cp multinode-594983-m03:/home/docker/cp-test.txt multinode-594983-m02:/home/docker/cp-test_multinode-594983-m03_multinode-594983-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 ssh -n multinode-594983-m02 "sudo cat /home/docker/cp-test_multinode-594983-m03_multinode-594983-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.04s)
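The CopyFile steps above repeat one round trip per node pair: copy testdata/cp-test.txt with "minikube cp", then read it back with "minikube ssh -n <node> sudo cat ...". A minimal, self-contained sketch of that round trip using os/exec, with the binary path and profile name taken from the log; this is illustrative and is not the helpers_test.go implementation:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		bin := "out/minikube-linux-amd64" // binary path as used throughout this report
		profile := "multinode-594983"     // profile name from the CopyFile block above

		// Copy a local file onto the primary node, mirroring the "cp" step above.
		cp := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt",
			profile+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			log.Fatalf("cp failed: %v\n%s", err, out)
		}

		// Read the file back over SSH, mirroring the "ssh -n ... sudo cat" step.
		cat := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt")
		out, err := cat.CombinedOutput()
		if err != nil {
			log.Fatalf("ssh cat failed: %v\n%s", err, out)
		}
		fmt.Printf("round-tripped contents:\n%s", out)
	}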

                                                
                                    
TestMultiNode/serial/StopNode (2.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-594983 node stop m03: (1.273957885s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-594983 status: exit status 7 (419.363682ms)

                                                
                                                
-- stdout --
	multinode-594983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-594983-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-594983-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-594983 status --alsologtostderr: exit status 7 (420.124725ms)

                                                
                                                
-- stdout --
	multinode-594983
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-594983-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-594983-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:01:22.386459  503558 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:01:22.386548  503558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:01:22.386556  503558 out.go:358] Setting ErrFile to fd 2...
	I0127 12:01:22.386560  503558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:01:22.386722  503558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 12:01:22.386879  503558 out.go:352] Setting JSON to false
	I0127 12:01:22.386904  503558 mustload.go:65] Loading cluster: multinode-594983
	I0127 12:01:22.387030  503558 notify.go:220] Checking for updates...
	I0127 12:01:22.387299  503558 config.go:182] Loaded profile config "multinode-594983": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:01:22.387318  503558 status.go:174] checking status of multinode-594983 ...
	I0127 12:01:22.387726  503558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:01:22.387761  503558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:01:22.403025  503558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36025
	I0127 12:01:22.403430  503558 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:01:22.403964  503558 main.go:141] libmachine: Using API Version  1
	I0127 12:01:22.403987  503558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:01:22.404401  503558 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:01:22.404677  503558 main.go:141] libmachine: (multinode-594983) Calling .GetState
	I0127 12:01:22.406393  503558 status.go:371] multinode-594983 host status = "Running" (err=<nil>)
	I0127 12:01:22.406415  503558 host.go:66] Checking if "multinode-594983" exists ...
	I0127 12:01:22.406838  503558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:01:22.406891  503558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:01:22.422380  503558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I0127 12:01:22.422775  503558 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:01:22.423203  503558 main.go:141] libmachine: Using API Version  1
	I0127 12:01:22.423224  503558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:01:22.423580  503558 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:01:22.423781  503558 main.go:141] libmachine: (multinode-594983) Calling .GetIP
	I0127 12:01:22.426750  503558 main.go:141] libmachine: (multinode-594983) DBG | domain multinode-594983 has defined MAC address 52:54:00:d1:23:15 in network mk-multinode-594983
	I0127 12:01:22.427153  503558 main.go:141] libmachine: (multinode-594983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:15", ip: ""} in network mk-multinode-594983: {Iface:virbr1 ExpiryTime:2025-01-27 12:58:40 +0000 UTC Type:0 Mac:52:54:00:d1:23:15 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-594983 Clientid:01:52:54:00:d1:23:15}
	I0127 12:01:22.427177  503558 main.go:141] libmachine: (multinode-594983) DBG | domain multinode-594983 has defined IP address 192.168.39.87 and MAC address 52:54:00:d1:23:15 in network mk-multinode-594983
	I0127 12:01:22.427332  503558 host.go:66] Checking if "multinode-594983" exists ...
	I0127 12:01:22.427618  503558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:01:22.427655  503558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:01:22.443285  503558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0127 12:01:22.443609  503558 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:01:22.444086  503558 main.go:141] libmachine: Using API Version  1
	I0127 12:01:22.444129  503558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:01:22.444418  503558 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:01:22.444579  503558 main.go:141] libmachine: (multinode-594983) Calling .DriverName
	I0127 12:01:22.444773  503558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:01:22.444800  503558 main.go:141] libmachine: (multinode-594983) Calling .GetSSHHostname
	I0127 12:01:22.448074  503558 main.go:141] libmachine: (multinode-594983) DBG | domain multinode-594983 has defined MAC address 52:54:00:d1:23:15 in network mk-multinode-594983
	I0127 12:01:22.448625  503558 main.go:141] libmachine: (multinode-594983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:15", ip: ""} in network mk-multinode-594983: {Iface:virbr1 ExpiryTime:2025-01-27 12:58:40 +0000 UTC Type:0 Mac:52:54:00:d1:23:15 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-594983 Clientid:01:52:54:00:d1:23:15}
	I0127 12:01:22.448660  503558 main.go:141] libmachine: (multinode-594983) DBG | domain multinode-594983 has defined IP address 192.168.39.87 and MAC address 52:54:00:d1:23:15 in network mk-multinode-594983
	I0127 12:01:22.448719  503558 main.go:141] libmachine: (multinode-594983) Calling .GetSSHPort
	I0127 12:01:22.448963  503558 main.go:141] libmachine: (multinode-594983) Calling .GetSSHKeyPath
	I0127 12:01:22.449175  503558 main.go:141] libmachine: (multinode-594983) Calling .GetSSHUsername
	I0127 12:01:22.449328  503558 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/multinode-594983/id_rsa Username:docker}
	I0127 12:01:22.531322  503558 ssh_runner.go:195] Run: systemctl --version
	I0127 12:01:22.536941  503558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:01:22.550275  503558 kubeconfig.go:125] found "multinode-594983" server: "https://192.168.39.87:8443"
	I0127 12:01:22.550314  503558 api_server.go:166] Checking apiserver status ...
	I0127 12:01:22.550350  503558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:01:22.562673  503558 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1153/cgroup
	W0127 12:01:22.571591  503558 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1153/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:01:22.571625  503558 ssh_runner.go:195] Run: ls
	I0127 12:01:22.576380  503558 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0127 12:01:22.582860  503558 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I0127 12:01:22.582894  503558 status.go:463] multinode-594983 apiserver status = Running (err=<nil>)
	I0127 12:01:22.582908  503558 status.go:176] multinode-594983 status: &{Name:multinode-594983 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:01:22.582934  503558 status.go:174] checking status of multinode-594983-m02 ...
	I0127 12:01:22.583404  503558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:01:22.583466  503558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:01:22.600260  503558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0127 12:01:22.600726  503558 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:01:22.601223  503558 main.go:141] libmachine: Using API Version  1
	I0127 12:01:22.601247  503558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:01:22.601591  503558 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:01:22.601785  503558 main.go:141] libmachine: (multinode-594983-m02) Calling .GetState
	I0127 12:01:22.603300  503558 status.go:371] multinode-594983-m02 host status = "Running" (err=<nil>)
	I0127 12:01:22.603315  503558 host.go:66] Checking if "multinode-594983-m02" exists ...
	I0127 12:01:22.603577  503558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:01:22.603618  503558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:01:22.619261  503558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I0127 12:01:22.619641  503558 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:01:22.620049  503558 main.go:141] libmachine: Using API Version  1
	I0127 12:01:22.620068  503558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:01:22.620451  503558 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:01:22.620632  503558 main.go:141] libmachine: (multinode-594983-m02) Calling .GetIP
	I0127 12:01:22.623447  503558 main.go:141] libmachine: (multinode-594983-m02) DBG | domain multinode-594983-m02 has defined MAC address 52:54:00:97:0c:2b in network mk-multinode-594983
	I0127 12:01:22.623876  503558 main.go:141] libmachine: (multinode-594983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:0c:2b", ip: ""} in network mk-multinode-594983: {Iface:virbr1 ExpiryTime:2025-01-27 12:59:38 +0000 UTC Type:0 Mac:52:54:00:97:0c:2b Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-594983-m02 Clientid:01:52:54:00:97:0c:2b}
	I0127 12:01:22.623907  503558 main.go:141] libmachine: (multinode-594983-m02) DBG | domain multinode-594983-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:97:0c:2b in network mk-multinode-594983
	I0127 12:01:22.624077  503558 host.go:66] Checking if "multinode-594983-m02" exists ...
	I0127 12:01:22.624394  503558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:01:22.624430  503558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:01:22.641202  503558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43025
	I0127 12:01:22.641626  503558 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:01:22.642106  503558 main.go:141] libmachine: Using API Version  1
	I0127 12:01:22.642136  503558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:01:22.642522  503558 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:01:22.642743  503558 main.go:141] libmachine: (multinode-594983-m02) Calling .DriverName
	I0127 12:01:22.642935  503558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:01:22.642961  503558 main.go:141] libmachine: (multinode-594983-m02) Calling .GetSSHHostname
	I0127 12:01:22.645378  503558 main.go:141] libmachine: (multinode-594983-m02) DBG | domain multinode-594983-m02 has defined MAC address 52:54:00:97:0c:2b in network mk-multinode-594983
	I0127 12:01:22.645792  503558 main.go:141] libmachine: (multinode-594983-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:0c:2b", ip: ""} in network mk-multinode-594983: {Iface:virbr1 ExpiryTime:2025-01-27 12:59:38 +0000 UTC Type:0 Mac:52:54:00:97:0c:2b Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-594983-m02 Clientid:01:52:54:00:97:0c:2b}
	I0127 12:01:22.645826  503558 main.go:141] libmachine: (multinode-594983-m02) DBG | domain multinode-594983-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:97:0c:2b in network mk-multinode-594983
	I0127 12:01:22.646109  503558 main.go:141] libmachine: (multinode-594983-m02) Calling .GetSSHPort
	I0127 12:01:22.646287  503558 main.go:141] libmachine: (multinode-594983-m02) Calling .GetSSHKeyPath
	I0127 12:01:22.646461  503558 main.go:141] libmachine: (multinode-594983-m02) Calling .GetSSHUsername
	I0127 12:01:22.646601  503558 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/multinode-594983-m02/id_rsa Username:docker}
	I0127 12:01:22.722876  503558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:01:22.735837  503558 status.go:176] multinode-594983-m02 status: &{Name:multinode-594983-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:01:22.735883  503558 status.go:174] checking status of multinode-594983-m03 ...
	I0127 12:01:22.736193  503558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:01:22.736234  503558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:01:22.752495  503558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0127 12:01:22.752994  503558 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:01:22.753645  503558 main.go:141] libmachine: Using API Version  1
	I0127 12:01:22.753682  503558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:01:22.754107  503558 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:01:22.754355  503558 main.go:141] libmachine: (multinode-594983-m03) Calling .GetState
	I0127 12:01:22.756063  503558 status.go:371] multinode-594983-m03 host status = "Stopped" (err=<nil>)
	I0127 12:01:22.756076  503558 status.go:384] host is not running, skipping remaining checks
	I0127 12:01:22.756081  503558 status.go:176] multinode-594983-m03 status: &{Name:multinode-594983-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.11s)
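The stderr trace above shows how the status command decides an apiserver is healthy: after confirming the host is running and kubelet is active, it probes https://<node-ip>:8443/healthz and treats a 200 response as healthy. A minimal sketch of that last probe, with the address taken from the trace; skipping certificate verification is an illustrative shortcut for a manual check, not necessarily how the status command performs it, and depending on the cluster's RBAC settings an unauthenticated request may be rejected:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Address from the trace above; adjust for another cluster.
		url := "https://192.168.39.87:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Shortcut for a quick manual check only: do not verify the serving cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned:", resp.StatusCode) // 200 corresponds to the "ok" seen above
	}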

                                                
                                    
TestMultiNode/serial/StartAfterStop (33.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 node start m03 -v=7 --alsologtostderr
E0127 12:01:23.464908  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-594983 node start m03 -v=7 --alsologtostderr: (32.758495372s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (33.38s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (310.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-594983
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-594983
E0127 12:02:14.000667  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:04:10.940315  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-594983: (3m3.079962679s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-594983 --wait=true -v=8 --alsologtostderr
E0127 12:06:23.464921  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-594983 --wait=true -v=8 --alsologtostderr: (2m7.753620682s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-594983
--- PASS: TestMultiNode/serial/RestartKeepsNodes (310.94s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-594983 node delete m03: (1.588861477s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.13s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (181.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 stop
E0127 12:09:10.940206  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:09:26.534382  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-594983 stop: (3m1.68146209s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-594983 status: exit status 7 (95.031661ms)

                                                
                                                
-- stdout --
	multinode-594983
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-594983-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-594983 status --alsologtostderr: exit status 7 (86.898005ms)

                                                
                                                
-- stdout --
	multinode-594983
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-594983-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:10:11.024597  506245 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:10:11.024697  506245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:10:11.024705  506245 out.go:358] Setting ErrFile to fd 2...
	I0127 12:10:11.024709  506245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:10:11.024924  506245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 12:10:11.025094  506245 out.go:352] Setting JSON to false
	I0127 12:10:11.025129  506245 mustload.go:65] Loading cluster: multinode-594983
	I0127 12:10:11.025233  506245 notify.go:220] Checking for updates...
	I0127 12:10:11.025524  506245 config.go:182] Loaded profile config "multinode-594983": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:10:11.025544  506245 status.go:174] checking status of multinode-594983 ...
	I0127 12:10:11.025936  506245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:10:11.026006  506245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:10:11.041209  506245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I0127 12:10:11.041722  506245 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:10:11.042441  506245 main.go:141] libmachine: Using API Version  1
	I0127 12:10:11.042467  506245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:10:11.042829  506245 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:10:11.043020  506245 main.go:141] libmachine: (multinode-594983) Calling .GetState
	I0127 12:10:11.044718  506245 status.go:371] multinode-594983 host status = "Stopped" (err=<nil>)
	I0127 12:10:11.044745  506245 status.go:384] host is not running, skipping remaining checks
	I0127 12:10:11.044753  506245 status.go:176] multinode-594983 status: &{Name:multinode-594983 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:10:11.044775  506245 status.go:174] checking status of multinode-594983-m02 ...
	I0127 12:10:11.045066  506245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 12:10:11.045117  506245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:10:11.060147  506245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0127 12:10:11.060568  506245 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:10:11.061017  506245 main.go:141] libmachine: Using API Version  1
	I0127 12:10:11.061044  506245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:10:11.061391  506245 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:10:11.061576  506245 main.go:141] libmachine: (multinode-594983-m02) Calling .GetState
	I0127 12:10:11.062927  506245 status.go:371] multinode-594983-m02 host status = "Stopped" (err=<nil>)
	I0127 12:10:11.062945  506245 status.go:384] host is not running, skipping remaining checks
	I0127 12:10:11.062953  506245 status.go:176] multinode-594983-m02 status: &{Name:multinode-594983-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.86s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (106.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-594983 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 12:11:23.465411  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-594983 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m46.414768347s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-594983 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.95s)
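For reference, a minimal sketch of the multinode lifecycle exercised by the serial tests above, using the same profile (multinode-594983) and flags that appear in the log; this is a reconstruction, not test output:

	# restart a single stopped worker, then verify cluster state
	out/minikube-linux-amd64 -p multinode-594983 node start m03 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p multinode-594983 status -v=7 --alsologtostderr
	# stop the whole cluster and bring every node back with --wait=true
	out/minikube-linux-amd64 stop -p multinode-594983
	out/minikube-linux-amd64 start -p multinode-594983 --wait=true -v=8 --alsologtostderr
	# remove a worker; once every node is stopped, "status" exits with status 7
	out/minikube-linux-amd64 -p multinode-594983 node delete m03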

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (42.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-594983
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-594983-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-594983-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (63.611615ms)

                                                
                                                
-- stdout --
	* [multinode-594983-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-594983-m02' is duplicated with machine name 'multinode-594983-m02' in profile 'multinode-594983'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-594983-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-594983-m03 --driver=kvm2  --container-runtime=containerd: (41.74085742s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-594983
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-594983: exit status 80 (213.721686ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-594983 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-594983-m03 already exists in multinode-594983-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-594983-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.87s)

                                                
                                    
x
+
TestPreload (204.55s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-870881 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-870881 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m22.217460597s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-870881 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-870881 image pull gcr.io/k8s-minikube/busybox: (2.339737412s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-870881
E0127 12:14:10.936851  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-870881: (6.474159669s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-870881 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-870881 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m52.447943146s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-870881 image list
helpers_test.go:175: Cleaning up "test-preload-870881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-870881
--- PASS: TestPreload (204.55s)
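A minimal sketch of the preload scenario validated above: start without a preloaded image tarball on an older Kubernetes, pull an extra image, stop, then restart with the default (preloaded) settings and check the image list. The test-preload-870881 profile name and flags are taken from the log (verbosity flags omitted); whether the pulled image persists is what the test asserts via "image list":

	out/minikube-linux-amd64 start -p test-preload-870881 --memory=2200 --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-870881 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-870881
	out/minikube-linux-amd64 start -p test-preload-870881 --memory=2200 --wait=true --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 -p test-preload-870881 image list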

                                                
                                    
x
+
TestScheduledStopUnix (112.98s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-738640 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0127 12:16:23.466163  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-738640 --memory=2048 --driver=kvm2  --container-runtime=containerd: (41.342099633s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-738640 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-738640 -n scheduled-stop-738640
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-738640 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 12:16:48.709063  478387 retry.go:31] will retry after 72.564µs: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.710227  478387 retry.go:31] will retry after 156.976µs: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.711396  478387 retry.go:31] will retry after 174.632µs: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.712526  478387 retry.go:31] will retry after 430.126µs: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.713657  478387 retry.go:31] will retry after 683.532µs: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.714783  478387 retry.go:31] will retry after 633.235µs: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.715906  478387 retry.go:31] will retry after 1.581098ms: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.718110  478387 retry.go:31] will retry after 2.313309ms: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.721326  478387 retry.go:31] will retry after 2.595067ms: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.724526  478387 retry.go:31] will retry after 3.687255ms: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.728729  478387 retry.go:31] will retry after 7.004939ms: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.735981  478387 retry.go:31] will retry after 11.937204ms: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.748164  478387 retry.go:31] will retry after 6.605793ms: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.755360  478387 retry.go:31] will retry after 25.772092ms: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
I0127 12:16:48.781583  478387 retry.go:31] will retry after 19.879ms: open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/scheduled-stop-738640/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-738640 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-738640 -n scheduled-stop-738640
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-738640
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-738640 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-738640
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-738640: exit status 7 (76.925465ms)

                                                
                                                
-- stdout --
	scheduled-stop-738640
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-738640 -n scheduled-stop-738640
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-738640 -n scheduled-stop-738640: exit status 7 (65.966671ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-738640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-738640
--- PASS: TestScheduledStopUnix (112.98s)
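A minimal sketch of the scheduled-stop flow covered above, using the scheduled-stop-738640 profile from the log (a reconstruction, not test output):

	# schedule a stop 5 minutes out, then cancel it; the host keeps running
	out/minikube-linux-amd64 stop -p scheduled-stop-738640 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-738640 --cancel-scheduled
	# re-schedule with a short delay and let it fire
	out/minikube-linux-amd64 stop -p scheduled-stop-738640 --schedule 15s
	# after the scheduled stop fires, "status" reports Stopped and exits with status 7
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-738640 -n scheduled-stop-738640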

                                                
                                    
x
+
TestRunningBinaryUpgrade (165.99s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4172997823 start -p running-upgrade-119528 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0127 12:18:54.002865  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:19:10.934487  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4172997823 start -p running-upgrade-119528 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m41.86401218s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-119528 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-119528 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (59.722871293s)
helpers_test.go:175: Cleaning up "running-upgrade-119528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-119528
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-119528: (1.208988383s)
--- PASS: TestRunningBinaryUpgrade (165.99s)
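A minimal sketch of the running-binary upgrade exercised above: a cluster created with an old released binary (downloaded by the test to a temporary path) is upgraded in place by re-running start with the binary under test. The profile name and flags are from the log; the /tmp path suffix is whatever the test downloaded:

	/tmp/minikube-v1.26.0.4172997823 start -p running-upgrade-119528 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 start -p running-upgrade-119528 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 delete -p running-upgrade-119528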

                                                
                                    
x
+
TestKubernetesUpgrade (243.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m53.544894499s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-570656
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-570656: (2.452460208s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-570656 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-570656 status --format={{.Host}}: exit status 7 (68.971676ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m8.671325369s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-570656 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (85.825915ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-570656] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-570656
	    minikube start -p kubernetes-upgrade-570656 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5706562 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-570656 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (56.95975645s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-570656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-570656
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-570656: (1.306723157s)
--- PASS: TestKubernetesUpgrade (243.14s)
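A minimal sketch of the upgrade path verified above: create the cluster on v1.20.0, stop it, start again on v1.32.1, then confirm that a direct downgrade back to v1.20.0 is refused (exit status 106). The profile name, versions, and the suggested recovery commands are taken from the log:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-570656
	out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.32.1 --driver=kvm2 --container-runtime=containerd
	# refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106):
	out/minikube-linux-amd64 start -p kubernetes-upgrade-570656 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
	# recovery suggested by the error output: delete and recreate at the older version
	minikube delete -p kubernetes-upgrade-570656
	minikube start -p kubernetes-upgrade-570656 --kubernetes-version=v1.20.0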

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260451 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-260451 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (89.556886ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-260451] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
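A minimal sketch of the flag validation checked above: --no-kubernetes cannot be combined with --kubernetes-version (exit status 14); either drop the version flag or clear a globally configured version first. The profile name and commands are from the log and from the error output's own suggestion:

	# rejected with MK_USAGE (exit status 14):
	out/minikube-linux-amd64 start -p NoKubernetes-260451 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=containerd
	# clear any global default, then start without Kubernetes:
	minikube config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-260451 --no-kubernetes --driver=kvm2 --container-runtime=containerd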

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (119.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260451 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-260451 --driver=kvm2  --container-runtime=containerd: (1m59.546245682s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-260451 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (119.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (72.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260451 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-260451 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m11.018215238s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-260451 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-260451 status -o json: exit status 2 (224.996496ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-260451","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-260451
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-260451: (1.001718965s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (72.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (29.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260451 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0127 12:21:23.464777  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-260451 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.272694558s)
--- PASS: TestNoKubernetes/serial/Start (29.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-260451 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-260451 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.7498ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-260451
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-260451: (1.294382994s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (63.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260451 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-260451 --driver=kvm2  --container-runtime=containerd: (1m3.34811064s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (63.35s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (112.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2830196556 start -p stopped-upgrade-558454 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2830196556 start -p stopped-upgrade-558454 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m10.014604561s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2830196556 -p stopped-upgrade-558454 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2830196556 -p stopped-upgrade-558454 stop: (1.815579232s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-558454 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-558454 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (40.288559319s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (112.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-260451 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-260451 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.974307ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-662609 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-662609 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (104.413595ms)

                                                
                                                
-- stdout --
	* [false-662609] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:22:51.578158  515430 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:22:51.578258  515430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:22:51.578265  515430 out.go:358] Setting ErrFile to fd 2...
	I0127 12:22:51.578272  515430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:22:51.578489  515430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
	I0127 12:22:51.579045  515430 out.go:352] Setting JSON to false
	I0127 12:22:51.580044  515430 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11115,"bootTime":1737969457,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:22:51.580154  515430 start.go:139] virtualization: kvm guest
	I0127 12:22:51.582335  515430 out.go:177] * [false-662609] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:22:51.583551  515430 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:22:51.583553  515430 notify.go:220] Checking for updates...
	I0127 12:22:51.584860  515430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:22:51.586244  515430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
	I0127 12:22:51.587464  515430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
	I0127 12:22:51.588710  515430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:22:51.589842  515430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:22:51.591261  515430 config.go:182] Loaded profile config "cert-expiration-455827": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:22:51.591390  515430 config.go:182] Loaded profile config "kubernetes-upgrade-570656": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:22:51.591525  515430 config.go:182] Loaded profile config "stopped-upgrade-558454": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0127 12:22:51.591620  515430 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:22:51.628244  515430 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:22:51.629370  515430 start.go:297] selected driver: kvm2
	I0127 12:22:51.629391  515430 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:22:51.629405  515430 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:22:51.631445  515430 out.go:201] 
	W0127 12:22:51.632563  515430 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0127 12:22:51.633759  515430 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-662609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-662609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:20:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.13:8443
  name: cert-expiration-455827
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:22:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.68:8443
  name: kubernetes-upgrade-570656
contexts:
- context:
    cluster: cert-expiration-455827
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:20:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-455827
  name: cert-expiration-455827
- context:
    cluster: kubernetes-upgrade-570656
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:22:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-570656
  name: kubernetes-upgrade-570656
current-context: kubernetes-upgrade-570656
kind: Config
preferences: {}
users:
- name: cert-expiration-455827
  user:
    client-certificate: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/cert-expiration-455827/client.crt
    client-key: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/cert-expiration-455827/client.key
- name: kubernetes-upgrade-570656
  user:
    client-certificate: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kubernetes-upgrade-570656/client.crt
    client-key: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kubernetes-upgrade-570656/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-662609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: /etc/docker/daemon.json:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: docker system info:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: cri-docker daemon status:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: cri-docker daemon config:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: cri-dockerd version:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: containerd daemon status:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: containerd daemon config:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: /etc/containerd/config.toml:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: containerd config dump:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: crio daemon status:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: crio daemon config:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: /etc/crio:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

>>> host: crio config:
* Profile "false-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-662609"

----------------------- debugLogs end: false-662609 [took: 3.139647291s] --------------------------------
helpers_test.go:175: Cleaning up "false-662609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-662609
--- PASS: TestNetworkPlugins/group/false (3.40s)

                                                
                                    
x
+
TestPause/serial/Start (64.72s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-040855 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-040855 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m4.720899287s)
--- PASS: TestPause/serial/Start (64.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (76.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m16.937123985s)
--- PASS: TestNetworkPlugins/group/auto/Start (76.94s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (56.43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-040855 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0127 12:24:10.934189  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-040855 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (56.408129232s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.43s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-558454
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (67.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m7.586980418s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (92.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m32.169414406s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-662609 "pgrep -a kubelet"
I0127 12:24:55.201048  478387 config.go:182] Loaded profile config "auto-662609": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-662609 replace --force -f testdata/netcat-deployment.yaml
I0127 12:24:56.624710  478387 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hddq2" [ef75ef42-5a25-4b2c-953a-958afb0d39c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hddq2" [ef75ef42-5a25-4b2c-953a-958afb0d39c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00389202s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.16s)

                                                
                                    
x
+
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-040855 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-040855 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-040855 --output=json --layout=cluster: exit status 2 (263.966285ms)

                                                
                                                
-- stdout --
	{"Name":"pause-040855","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-040855","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-040855 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.79s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-040855 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.01s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-040855 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-040855 --alsologtostderr -v=5: (1.00931515s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.78s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-662609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (83.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m23.378696707s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (83.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m23.262851057s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-g8m9g" [c1d538cb-f0a3-4c49-892c-de5a0592b185] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005486127s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-662609 "pgrep -a kubelet"
I0127 12:25:33.673552  478387 config.go:182] Loaded profile config "kindnet-662609": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-662609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-q4jr2" [bb8583b5-5f44-437e-8b38-d6cb72ec9a59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-q4jr2" [bb8583b5-5f44-437e-8b38-d6cb72ec9a59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004929305s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-662609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (85.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0127 12:26:06.536398  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m25.43516573s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wflfm" [9f9d77dd-2872-4ed6-ac34-44cc92d4e07b] Running
E0127 12:26:23.464814  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005601299s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-662609 "pgrep -a kubelet"
I0127 12:26:26.111999  478387 config.go:182] Loaded profile config "calico-662609": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-662609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-85rqq" [2c82791b-d758-4533-bacd-bfd32fdc7a51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-85rqq" [2c82791b-d758-4533-bacd-bfd32fdc7a51] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005208033s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-662609 "pgrep -a kubelet"
I0127 12:26:31.713466  478387 config.go:182] Loaded profile config "custom-flannel-662609": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-662609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wcbgb" [8b6f383a-95fb-4528-95fe-1e358f8869ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wcbgb" [8b6f383a-95fb-4528-95fe-1e358f8869ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004873504s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-662609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-662609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-662609 "pgrep -a kubelet"
I0127 12:26:48.176166  478387 config.go:182] Loaded profile config "enable-default-cni-662609": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-662609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-h5tc8" [f472eb5d-f192-46c1-b574-fb2895c94fc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-h5tc8" [f472eb5d-f192-46c1-b574-fb2895c94fc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005440326s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (66.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-662609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m6.304519855s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-662609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (185.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-858845 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-858845 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m5.568062449s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (185.57s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (103.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-215237 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-215237 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m43.523144748s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-p4p6j" [892abcf3-2733-4576-b58d-969930f3862c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003914654s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-662609 "pgrep -a kubelet"
I0127 12:27:31.672326  478387 config.go:182] Loaded profile config "flannel-662609": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-662609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5xgwt" [a34d2ffa-3e06-49f6-a4ad-130ee5f46a8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5xgwt" [a34d2ffa-3e06-49f6-a4ad-130ee5f46a8d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003681489s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-662609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (81.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-346100 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-346100 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m21.504603784s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-662609 "pgrep -a kubelet"
I0127 12:28:01.018110  478387 config.go:182] Loaded profile config "bridge-662609": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-662609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-p77xt" [75394f07-c3ac-4a36-93bc-a7ffe9d62e74] Pending
helpers_test.go:344: "netcat-5d86dc444-p77xt" [75394f07-c3ac-4a36-93bc-a7ffe9d62e74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003976446s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-662609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-662609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0127 12:36:28.384861  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:36:31.914156  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:36:47.586308  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:36:48.490548  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:36:59.614873  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:37:16.194378  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:37:25.460937  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:37:50.307089  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:37:53.160888  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:38:01.256452  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:38:28.955973  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:39:10.933992  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:39:56.083735  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:06.446432  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:27.459392  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:34.148502  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:41:19.884932  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:41:23.465054  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:41:31.914041  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:41:48.490488  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:42:25.460594  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:42:46.538640  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:43:01.255590  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:44:10.933985  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:44:56.083808  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:45:06.446993  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:45:27.459109  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:19.151309  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:19.884438  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:23.465266  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:31.914043  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:48.489983  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:50.523505  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:47:25.459646  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:47:42.948386  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:47:54.976881  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:01.255489  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:11.555876  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:48.522679  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:10.933632  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:24.317952  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:49:56.083910  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:50:06.446099  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:50:27.458596  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:19.884963  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:23.464906  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:29.510733  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:31.913994  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:48.490630  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:52:14.006520  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:52:25.459693  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:01.255582  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:10.934093  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:54:56.083257  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:55:06.446524  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:55:27.459015  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:56:19.884150  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:56:23.464954  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/functional-508115/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:56:31.914045  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:56:48.489883  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-887672 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-887672 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m11.273137885s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-215237 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e06fa028-0450-48ff-9e41-d8e2e321e4e1] Pending
helpers_test.go:344: "busybox" [e06fa028-0450-48ff-9e41-d8e2e321e4e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e06fa028-0450-48ff-9e41-d8e2e321e4e1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004593365s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-215237 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-215237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-215237 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-215237 --alsologtostderr -v=3
E0127 12:29:10.933777  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-215237 --alsologtostderr -v=3: (1m31.001676191s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-346100 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d5c6ff0-c897-46ec-b1d6-845d60586db2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4d5c6ff0-c897-46ec-b1d6-845d60586db2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003933079s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-346100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-346100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-346100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-346100 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-346100 --alsologtostderr -v=3: (1m31.005692744s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-887672 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ace336d7-75f1-4fe6-9207-c9149f2bf7f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ace336d7-75f1-4fe6-9207-c9149f2bf7f9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003920414s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-887672 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-887672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-887672 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-887672 --alsologtostderr -v=3
E0127 12:29:56.083144  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:56.090396  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:56.101796  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:56.123299  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:56.165109  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:56.247396  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:56.409314  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:56.732512  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:57.374700  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:58.656087  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:01.217838  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-887672 --alsologtostderr -v=3: (1m31.478054211s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-858845 create -f testdata/busybox.yaml
E0127 12:30:06.339937  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6995f2ff-b759-420e-9834-cf4962417570] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6995f2ff-b759-420e-9834-cf4962417570] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004499596s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-858845 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-858845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-858845 describe deploy/metrics-server -n kube-system
E0127 12:30:16.582025  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (91.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-858845 --alsologtostderr -v=3
E0127 12:30:27.459434  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:27.465864  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:27.477182  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:27.498542  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:27.540691  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:27.622208  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:27.783741  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:28.105664  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:28.747648  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:30.029217  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:32.591229  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:37.063357  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:37.712796  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-858845 --alsologtostderr -v=3: (1m31.16506044s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215237 -n no-preload-215237
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215237 -n no-preload-215237: exit status 7 (66.285527ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-215237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-346100 -n embed-certs-346100
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-346100 -n embed-certs-346100: exit status 7 (65.974243ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-346100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-887672 -n default-k8s-diff-port-887672
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-887672 -n default-k8s-diff-port-887672: exit status 7 (67.697494ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-887672 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-858845 -n old-k8s-version-858845
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-858845 -n old-k8s-version-858845: exit status 7 (80.177714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-858845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (165.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-858845 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0127 12:31:48.490370  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:48.496848  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:48.508275  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:48.529674  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:48.571101  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:48.652556  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:48.814486  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:49.136229  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:49.398282  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:49.778580  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:51.059887  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:52.406755  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:53.621803  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:58.744209  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:00.860892  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:08.986080  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:12.888711  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:25.459452  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:25.465812  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:25.477174  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:25.498563  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:25.539895  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:25.621325  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:25.782901  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:26.104207  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:26.746003  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:28.027867  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:29.467855  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:30.590096  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:35.711895  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:39.947113  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:41.822657  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:45.953926  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:53.850553  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:01.255617  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:01.262031  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:01.273465  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:01.294818  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:01.336217  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:01.417641  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:01.579208  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:01.900682  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:02.541932  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:03.823616  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:06.385690  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:06.436172  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:10.430053  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:11.320081  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:11.507638  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:21.749008  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:42.230623  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:47.398100  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:34:03.744268  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:34:10.933718  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:34:15.772781  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/custom-flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:34:23.191902  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:34:32.351794  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/enable-default-cni-662609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-858845 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m45.405902366s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-858845 -n old-k8s-version-858845
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (165.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lt46p" [892f50f9-4582-4d71-8ba6-6cf6a0bc1054] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003367949s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lt46p" [892f50f9-4582-4d71-8ba6-6cf6a0bc1054] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005208265s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-858845 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-858845 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-858845 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858845 -n old-k8s-version-858845
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858845 -n old-k8s-version-858845: exit status 2 (254.403712ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-858845 -n old-k8s-version-858845
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-858845 -n old-k8s-version-858845: exit status 2 (254.604946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-858845 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858845 -n old-k8s-version-858845
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-858845 -n old-k8s-version-858845
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-610630 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 12:34:56.083278  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:06.446003  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:06.452462  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:06.463872  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:06.485310  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:06.526775  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:06.608048  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:06.769681  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:07.091587  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:07.733954  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:09.016068  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:09.319478  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/flannel-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:11.577684  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:16.699831  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:23.789416  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/auto-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:26.941614  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:27.458538  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:34.005049  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/addons-582557/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-610630 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (46.560359253s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-610630 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (6.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-610630 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-610630 --alsologtostderr -v=3: (6.603709649s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-610630 -n newest-cni-610630
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-610630 -n newest-cni-610630: exit status 7 (86.727696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-610630 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-610630 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 12:35:45.113720  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/bridge-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:47.423382  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/old-k8s-version-858845/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:55.161432  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:36:19.884794  478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/calico-662609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-610630 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (37.175551472s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-610630 -n newest-cni-610630
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-610630 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-610630 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-610630 -n newest-cni-610630
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-610630 -n newest-cni-610630: exit status 2 (252.002173ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-610630 -n newest-cni-610630
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-610630 -n newest-cni-610630: exit status 2 (240.959995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-610630 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-610630 -n newest-cni-610630
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-610630 -n newest-cni-610630
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

                                                
                                    

Test skip (38/316)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
266 TestNetworkPlugins/group/kubenet 3.08
274 TestNetworkPlugins/group/cilium 3.4
280 TestStartStop/group/disable-driver-mounts 0.24
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.

--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skipping the AMD GPU test: it only runs with the docker driver on an amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
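This tunnel subtest, and the seven tunnel skips that follow, require passwordless sudo for the route command on the host. A hypothetical way to enable them, assuming the CI user is jenkins and route lives at /sbin/route (both assumptions, not taken from this run):

	echo 'jenkins ALL=(root) NOPASSWD: /sbin/route' | sudo tee /etc/sudoers.d/minikube-tunnel
	sudo chmod 0440 /etc/sudoers.d/minikube-tunnel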

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires the none driver and a non-empty SUDO_USER env
--- SKIP: TestChangeNoneUser (0.00s)
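For reference, the environment this test expects is the none driver started under sudo (which sets SUDO_USER); a hedged sketch, not applicable to this KVM job:

	sudo -E out/minikube-linux-amd64 start --driver=none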

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-662609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-662609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:20:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.13:8443
  name: cert-expiration-455827
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:22:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.68:8443
  name: kubernetes-upgrade-570656
contexts:
- context:
    cluster: cert-expiration-455827
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:20:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-455827
  name: cert-expiration-455827
- context:
    cluster: kubernetes-upgrade-570656
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:22:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-570656
  name: kubernetes-upgrade-570656
current-context: kubernetes-upgrade-570656
kind: Config
preferences: {}
users:
- name: cert-expiration-455827
  user:
    client-certificate: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/cert-expiration-455827/client.crt
    client-key: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/cert-expiration-455827/client.key
- name: kubernetes-upgrade-570656
  user:
    client-certificate: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kubernetes-upgrade-570656/client.crt
    client-key: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kubernetes-upgrade-570656/client.key
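The dump above shows only the leftover cert-expiration-455827 and kubernetes-upgrade-570656 entries; the kubenet-662609 context was never created, which explains the errors throughout these debug logs. For reference, either remaining context can be selected explicitly:

	kubectl config get-contexts
	kubectl config use-context cert-expiration-455827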

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-662609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-662609"

                                                
                                                
----------------------- debugLogs end: kubenet-662609 [took: 2.922826338s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-662609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-662609
--- SKIP: TestNetworkPlugins/group/kubenet (3.08s)
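kubenet is the legacy kubelet network plugin and is not an option with the containerd runtime, which needs a CNI plugin instead; hence the whole group is skipped before a cluster is created. A hedged sketch of the closest containerd-compatible equivalent, using minikube's bridge CNI in place of kubenet (not something this run executed):

	out/minikube-linux-amd64 start -p kubenet-662609 --driver=kvm2 --container-runtime=containerd --cni=bridge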

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-662609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-662609

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-662609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-662609

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-662609

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-662609" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-662609" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-662609" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-662609" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-662609" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: kubelet daemon config:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> k8s: kubelet logs:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:20:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.13:8443
  name: cert-expiration-455827
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:22:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.68:8443
  name: kubernetes-upgrade-570656
contexts:
- context:
    cluster: cert-expiration-455827
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:20:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-455827
  name: cert-expiration-455827
- context:
    cluster: kubernetes-upgrade-570656
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:22:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-570656
  name: kubernetes-upgrade-570656
current-context: kubernetes-upgrade-570656
kind: Config
preferences: {}
users:
- name: cert-expiration-455827
  user:
    client-certificate: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/cert-expiration-455827/client.crt
    client-key: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/cert-expiration-455827/client.key
- name: kubernetes-upgrade-570656
  user:
    client-certificate: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kubernetes-upgrade-570656/client.crt
    client-key: /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kubernetes-upgrade-570656/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-662609

>>> host: docker daemon status:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: docker daemon config:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: docker system info:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: cri-docker daemon status:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: cri-docker daemon config:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: cri-dockerd version:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: containerd daemon status:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: containerd daemon config:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: containerd config dump:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: crio daemon status:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: crio daemon config:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: /etc/crio:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

>>> host: crio config:
* Profile "cilium-662609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-662609"

----------------------- debugLogs end: cilium-662609 [took: 3.238965869s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-662609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-662609
--- SKIP: TestNetworkPlugins/group/cilium (3.40s)

x
+
TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-416788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-416788
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)